First quarter Achievements in 2015


Hello Guys

I am Mustafa El-Masry, Principal Database Administrator and DB Analyst. I work as both a production DBA and a development DBA, and I am also an author and technical writer. To know more about me, visit my site from HERE.
200 Posts
Thank God, by the end of 2014 I had completed one hundred blog posts published on the biggest SQL Server community in the Middle East, SQL Server Performance Tuning; you can check my posts on it from HERE. In the first quarter of 2015 I published 200 posts on SQL Server technology in my community {SQL DATABASE ADMINISTRATION}; you can check the list of my posts from HERE, the annual WordPress report about my community activity from HERE, and my first Arabic post on SQL Server 2014 from HERE. Those who love reading can check my technical documents on SQL Server 2012 and 2014 technology from HERE. Finally, I am still a new SQL Server speaker; I have published four technical videos on SQL Server technology, which you can check from HERE.
Download All My Technical Posts

SQL Server DMV Replication Monitoring Part 3



Hello my followers. In the previous post I published part two of the series (SQL Server DMV Replication Monitoring), where I explained a very interesting point for any DBA working on replication solutions, using a simple query that depends on the dynamic query technique and SQL Server cursors.

Summary of the previous posts

First Post :

  1. Publication Status
  2. Subscription Status
  3. Send Email by the Current status

Second Post :

  1. Replication information for the Complete setup.
  2. Find publication article.

Replication in SQL Server Series  

1     2      3       4 

DMV Replication Monitoring Series  1  2  3

Today I will explain one new point that is very helpful and attractive: how we can check the Distributor agent status.

With the stored procedure below, which we can execute from a SQL Server Agent job, we will receive a notification mail if the Distributor job(s) are failing or stopped.

[sql]

USE [msdb]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- Sends a Database Mail alert when any continuous Distribution agent job
-- is not currently running.
CREATE PROCEDURE [dbo].[replDistributorStatusGet]
    @pRecipients VARCHAR(255) = 'SQLGULF-DBConsultant@outlook.com'
AS
BEGIN
    SET NOCOUNT ON;
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    DECLARE @is_sysadmin INT,
            @job_owner   sysname,
            @msg         VARCHAR(8000),
            @msg_header  VARCHAR(4000);

    SELECT @job_owner   = SUSER_SNAME(),
           @is_sysadmin = 1;

    -- Holds the current execution status of every SQL Server Agent job
    CREATE TABLE #jobStatus
    (
        job_id                UNIQUEIDENTIFIER NOT NULL,
        last_run_date         INT,
        last_run_time         INT,
        next_run_date         INT,
        next_run_time         INT,
        next_run_schedule_id  INT,
        requested_to_run      INT,
        request_source        INT,
        request_source_id     sysname COLLATE database_default NULL,
        running               INT,
        current_step          INT,
        current_retry_attempt INT,
        job_state             INT
    );

    -- xp_sqlagent_enum_jobs returns the live status of all Agent jobs
    INSERT INTO #jobStatus
    EXECUTE master.dbo.xp_sqlagent_enum_jobs @is_sysadmin, @job_owner;

    -- Continuous Distribution agent jobs that are not currently running
    SELECT j.name, js.command, jss.running
    FROM   msdb.dbo.sysjobsteps AS js
           JOIN msdb.dbo.sysjobs AS j  ON js.job_id = j.job_id
           JOIN #jobStatus       AS jss ON js.job_id = jss.job_id
    WHERE  js.step_id   = 1
      AND  js.subsystem = 'Distribution'
      AND  js.command LIKE '%-Continuous'
      AND  jss.running <> 1;

    IF @@ROWCOUNT > 0
    BEGIN
        SELECT @msg_header = 'Distributor job(s) FAILING OR STOPPED. Please check replication job(s) ASAP.'
                           + CHAR(10)
                           + 'Here is the list of job(s) that are failing or stopped'
                           + CHAR(10)
                           + '******************************************************************';

        SET @msg = '';
        SELECT @msg = @msg + CHAR(10) + j.name
        FROM   msdb.dbo.sysjobsteps AS js
               JOIN msdb.dbo.sysjobs AS j  ON js.job_id = j.job_id
               JOIN #jobStatus       AS jss ON js.job_id = jss.job_id
        WHERE  js.step_id   = 1
          AND  js.subsystem = 'Distribution'
          AND  js.command LIKE '%-Continuous'
          AND  jss.running <> 1;

        SELECT @msg = @msg_header + CHAR(10) + NULLIF(@msg, '');

        PRINT @msg;

        EXEC msdb.dbo.sp_send_dbmail
             @recipients = @pRecipients,
             @subject    = 'Production Replication Distributor Alert',
             @body       = @msg;
    END

    DROP TABLE #jobStatus;
END
GO

[/sql]

Now create a new SQL Server Agent job and execute this stored procedure in it:

[sql]

USE [msdb]
GO

DECLARE @return_value int

EXEC @return_value = [dbo].[replDistributorStatusGet]
 @pRecipients = N'SQLGULF-DBConsultant@outlook.com'
SELECT 'Return Value' = @return_value

GO

[/sql]

Follow me to learn more about SQL Server DMVs in replication monitoring.

Follow the author:

View all my tips: LinkedIn, Website, Slideshare, YouTube Channel.

Durable vs. Non-durable tables: In-Memory OLTP


Hello followers. Today I will explain a very important point: durable vs. non-durable tables. SQL Server 2014, which came to us with the In-Memory OLTP feature, offers two kinds of tables: durable and non-durable. The default is the durable table. One cool feature besides durable in-memory tables is the possibility to create non-durable in-memory tables. As you realize, you can get a lot of performance improvement with the in-memory solution, but what is the benefit of non-durable tables when the data will be lost in case of failure? Well, one option is to use non-durable in-memory tables as staging tables in your ETL solution, since you don't care about the staging data in case of a system crash. Do you have temporary data? Do you have data you are OK to lose due to a SQL Server restart? In that case you can think about non-durable tables.

Memory-optimized tables come with two durability options: (DURABILITY = SCHEMA_AND_DATA) and (DURABILITY = SCHEMA_ONLY).

Durable table: DURABILITY = SCHEMA_AND_DATA

Non-durable table: DURABILITY = SCHEMA_ONLY
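As a minimal sketch of the two options (this assumes a database that already has a MEMORY_OPTIMIZED_DATA filegroup; the table and column names are placeholders, not the ones from my demo):

[sql]

-- Durable: schema and data survive a restart (the default)
CREATE TABLE dbo.StagingDurable
(
    ID      INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload NVARCHAR(200)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Non-durable: only the schema survives a restart; the data is lost
CREATE TABLE dbo.StagingNonDurable
(
    ID      INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload NVARCHAR(200)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
GO

[/sql]

After a restart, StagingNonDurable still exists but is empty, which is exactly why it fits ETL staging scenarios.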

Complete the post to know more about the Scripts used in the DEMO

Continue reading “Durable vs. Non-durable tables: In-Memory OLTP”

How to Avoid the performance risk of the Delete Statements


Introduction

Hello everybody. In my last post I explained how we can create a clustered index on all heap tables with one single click (for the video, check this link). Today I will write about something new: how we can avoid the performance risk of delete statements. Yes, a delete statement can cause big trouble on your server if you have one table with a huge volume of data and many transactions hitting it. That makes it a critical table, and you should be sensitive with critical databases and critical objects, because you may need to delete huge amounts of data from such a table based on certain criteria; as an example, you may need to delete 1,000,000 records.

Problem

When you need to delete 1,000,000 records with some WHERE condition, and you are deleting from one of the critical tables on your server, don't take the wrong way of writing one direct delete statement: it will cost more ASYNC_NETWORK_IO and CXPACKET waits, and it can also cause schema locks or query blocking.

Solution :


Forget the direct delete statement, no thinking here! You should run the delete statement in batches:

  1. Grab a unique ID from the target table and insert it into a temp table.
  2. Loop on the temp table, returning the first 1,000 records each time, as an example.
  3. Use the MERGE technique to delete the data in the target table where the ID equals the ID in the source table.
  4. Update the loop counter.
  5. Commit the transaction if it succeeds; roll it back if it fails.
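The batching steps above can be sketched as follows. This is a minimal sketch: dbo.BigTable, its ID column, the date filter, and the batch size are placeholder names you would replace, and I use a plain DELETE ... IN instead of MERGE for brevity:

[sql]

-- 1. Grab the unique IDs of the rows to delete into a temp table
SELECT ID
INTO   #ToDelete
FROM   dbo.BigTable
WHERE  CreatedDate < '20140101';   -- your delete criteria

DECLARE @BatchSize INT = 1000;

WHILE EXISTS (SELECT 1 FROM #ToDelete)
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        -- 2/3. Delete one batch from the target table
        DELETE t
        FROM   dbo.BigTable AS t
        WHERE  t.ID IN (SELECT TOP (@BatchSize) ID FROM #ToDelete ORDER BY ID);

        -- 4. Remove the same batch from the driving temp table
        DELETE s
        FROM   #ToDelete AS s
        WHERE  s.ID IN (SELECT TOP (@BatchSize) ID FROM #ToDelete ORDER BY ID);

        COMMIT TRANSACTION;          -- 5. commit on success
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;    -- 5. roll back on failure
        THROW;
    END CATCH;
END;

DROP TABLE #ToDelete;

[/sql]

Each small transaction releases its locks quickly, so concurrent transactions against the table keep running between batches.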

In this way the delete statement will run smoothly without any bad effect on the SQL Server cluster's performance. Don't think about the time; think about the impact. There is no problem with a query that takes time without impact, because it is better than a fast query that executes in 3 seconds but fires up the CPU or the I/O of the server while it runs.

Things to consider while working with big tables.

  1. Use TRUNCATE TABLE if you need to delete all records.
  2. If you are deleting more than 70% of the data, create a temp table, copy only the records you need into it, and run TRUNCATE on the original table. This approach is much faster.
  3. Breaking a big transaction into small transactions applies to INSERT and UPDATE as well.
  4. Where possible, use table partitioning. This makes maintenance easy: you can drop a partition if you need to delete it.
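For point 2, the copy-and-truncate approach might look like this. It is only a sketch with placeholder table and column names, and in a real system you would also script out and re-create the indexes, constraints, and permissions on the original table:

[sql]

-- Keep only the rows you still need
SELECT *
INTO   dbo.BigTable_Keep
FROM   dbo.BigTable
WHERE  CreatedDate >= '20150101';

-- TRUNCATE is minimally logged and much faster than DELETE
TRUNCATE TABLE dbo.BigTable;

-- Put the kept rows back
INSERT INTO dbo.BigTable
SELECT * FROM dbo.BigTable_Keep;

DROP TABLE dbo.BigTable_Keep;

[/sql]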

DEMO PART :

Continue reading “How to Avoid the performance risk of the Delete Statements”

Create Clustered index on all heap tables by one single click V2


Hello my followers. In my last post I created a script that helps us find the heap tables at the database level and then automatically create a clustered index on each heap table. After working with it, I found that it operated at the database level, not the server level, and it is very hard to execute it on databases one by one (I love to create general scripts for the whole server, saved as stored procedures under msdb). I also found that the script took the first column of each table and created the clustered index on it, which is not correct from an index design and schema design point of view. So I updated the script to cover two new, very important points that make the stored procedure more comprehensive and reliable.

Updates in version 2:

  1. Select the best column in each heap table to create the clustered index on (I used a T-SQL CASE WHEN to return only the columns with specific data types on which a clustered index can be created).
  2. The script now runs at the server level to cover all databases on the SQL Server instance (I converted the query to a dynamic query, looped over each database on the server, and executed the dynamic query against it to print one script per database).
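As a minimal sketch of the server-level idea (this only lists the heap tables in every online user database, while the full Check_Heap_Tables procedure also generates the CREATE CLUSTERED INDEX statements; it assumes you have permissions on all databases):

[sql]

-- Build one batch that lists the heap tables of every online user database
DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql = @sql + N'
USE ' + QUOTENAME(name) + N';
SELECT DB_NAME()              AS database_name,
       SCHEMA_NAME(t.schema_id) AS schema_name,
       t.name                 AS table_name
FROM   sys.tables  AS t
       JOIN sys.indexes AS i ON i.object_id = t.object_id
WHERE  i.type = 0;   -- index type 0 = heap
'
FROM sys.databases
WHERE database_id > 4            -- skip the system databases
  AND state_desc = 'ONLINE';

EXEC sys.sp_executesql @sql;

[/sql]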

How to execute the stored procedure Check_Heap_Tables

  1. Create the stored procedure Check_Heap_Tables in the msdb database.
  2. Execute the Check_Heap_Tables stored procedure.
  3. Copy the T-SQL result, then execute it in another session.
  4. Copy the generated CREATE CLUSTERED INDEX T-SQL and execute it in a new session.

Continue reading “Create Clustered index on all heap tables by on single click V2”