
Improve SQL Server Performance with Compression

SQL Server data compression is now available in all editions of SQL Server, starting with 2016 SP1.

In this article, you'll learn not only how data compression saves space, but also how it can sometimes improve performance as well.

Space Savings vs. Performance

When I first heard about compression back in 2008, my first thought was that it would carry a big performance penalty: compression would save disk space, but it would probably slow things down as the data was compressed and decompressed. It turns out that compression can improve performance instead. Because compressed data fits in a smaller number of data pages, I/O requirements decrease. Since I/O is generally the bottleneck in SQL Server, this can improve performance. Compressed data also has a smaller memory requirement: when querying compressed data, fewer pages are copied to the buffer pool. The one area that is impacted is CPU. You do need some CPU headroom, because compression will require additional CPU resources.


The good thing is that if the workload is reasonably tuned, many SQL Server instances have more CPU resources than they need. One note of caution here: don't look to compression as the solution to major performance issues. You still need to look at physical resources, configuration, indexing, and query tuning. The point I'm trying to make is that decreased I/O and better memory utilization will benefit the workload in many cases.

Availability

Compression has been available in SQL Server in every version from 2008 through 2016, but only in Enterprise Edition. Beginning with SQL Server 2016 SP1, it's now available in Enterprise, Standard, and Express. I'm excited about this, because compression is now within the reach of any organization.

Compression Types

There are two types of compression that you can use to compress a table, index, or even a partition:

  • Row Level:

  • Row level compression works by storing fixed-width data types as variable-length data types. Nulls and zeroes take no space. For example, in an uncompressed table, an integer column takes 4 bytes per row, even for rows that hold a small number such as 1 or 10, or even null. Once row compression is implemented on the table, each value is stored with the smallest possible number of bytes. So, outside of the metadata, storing a 1 in an integer column takes 1 byte, and storing a null or 0 takes 0 bytes.


  • Starting with SQL Server 2012, Unicode compression is applied when you use row compression. This affects Unicode columns, that is, nchar and nvarchar. In an uncompressed table, each character takes up 2 bytes in a Unicode column, even if a small character set is used. In those cases, compressing the table stores those characters in 1 byte instead.


  • In the technical article Data Compression: Strategy, Capacity Planning, and Best Practices, Microsoft recommends using row compression on all data, as long as you have 10% spare CPU capacity and, of course, as long as you achieve some space savings.
  • It incurs a low CPU penalty.
  • Page Level:

  • When you implement page level compression, the rows are automatically row-compressed first.
  • Page level compression also removes repeated data within a page via two mechanisms: prefix and dictionary compression.
  • Page level compression can shrink a table more than row compression, but it is recommended for tables that are mostly inserted into and not updated often.
  • It does carry a higher CPU penalty. (A short T-SQL sketch for enabling both types follows this list.)
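
Enabling either type is done with a rebuild. Here is a minimal sketch of both options, assuming the bigTransactionHistory table from the demo below and a hypothetical index name IX_bigTransactionHistory; sp_estimate_data_compression_savings lets you preview the savings before you commit:

-- row-compress an entire table
ALTER TABLE dbo.bigTransactionHistory REBUILD WITH (DATA_COMPRESSION = ROW);

-- page-compress a single index (IX_bigTransactionHistory is an assumed name)
ALTER INDEX IX_bigTransactionHistory ON dbo.bigTransactionHistory
REBUILD WITH (DATA_COMPRESSION = PAGE);

-- estimate the savings before committing to a rebuild
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo',
     @object_name = 'bigTransactionHistory',
     @index_id = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';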

Neither of these compression types works on row-overflow data, which is data from a row that exceeds 8 KB. For example, you can create a table that has two varchar(8000) columns. If a row exceeds 8,060 bytes, SQL Server moves one or more of those columns to another page so that the row fits. One way to get around this limitation is the new COMPRESS function.
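
To make the overflow scenario concrete, here is a tiny sketch with a hypothetical table (not part of the original demo):

-- two varchar(8000) columns can exceed the 8,060-byte row limit
CREATE TABLE dbo.OverflowDemo
(
    Col1 varchar(8000),
    Col2 varchar(8000)
);

-- 16,000 bytes of data cannot fit on one page, so SQL Server moves
-- one column off-row, where row and page compression cannot reach it
INSERT INTO dbo.OverflowDemo
VALUES (REPLICATE('a', 8000), REPLICATE('b', 8000));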

COMPRESS Function

  • You can use this function to compress individual values, including row-overflow data.
  • It uses the GZIP algorithm to compress the values. There is a downside, however: the COMPRESS function must be applied each time a value is inserted or updated, and the DECOMPRESS function must then be used to read the value back. This means changes to the application or stored procedures, as the sketch below shows.
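
A minimal sketch of that round trip, assuming a hypothetical dbo.Documents table (COMPRESS returns varbinary(max), so the column must be typed accordingly):

CREATE TABLE dbo.Documents
(
    DocumentID int IDENTITY PRIMARY KEY,
    Body varbinary(max)   -- holds the GZIP-compressed value
);

-- the application must compress on every insert or update
INSERT INTO dbo.Documents (Body)
VALUES (COMPRESS(N'A very long document body that would overflow the row...'));

-- ...and decompress (casting back to the original type) on every read
SELECT DocumentID,
       CAST(DECOMPRESS(Body) AS nvarchar(max)) AS Body
FROM dbo.Documents;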

Performance Metrics

In this demo, we’ll do some performance comparisons. We’ll take a close look at I/O and memory impact. We’ll see how compression affects both reads and writes to the data.

  • Select comparison

I created two tables: one with page compression called “bigTransactionHistoryPAGE” and one with row compression called “bigTransactionHistoryROW”. I also created a table with no compression called “bigTransactionHistory” that I'll use during this demonstration as well. Finally, I populated the two compressed tables with around 31 million records.

Let’s take a look at the number of pages in each table or index:

SELECT OBJECT_NAME(i.[object_id]) AS TableName,
       i.name AS IndexName,
       SUM(s.used_page_count) AS IndexPages,
       FORMAT(1 - SUM(s.used_page_count) * 1.0 /
           CASE WHEN i.name LIKE 'IX%' THEN 131819 ELSE 143645 END, 'P') AS PercentSaved
FROM sys.dm_db_partition_stats AS s
JOIN sys.indexes AS i
    ON s.[object_id] = i.[object_id] AND s.index_id = i.index_id
WHERE OBJECT_NAME(i.[object_id]) LIKE 'bigTransactionHistory%'
  AND OBJECT_NAME(i.[object_id]) <> 'bigTransactionHistoryTEST'
GROUP BY i.[object_id], i.name
ORDER BY IndexPages DESC;


The uncompressed table is the largest. The non-clustered index compressed with row compression saved about 30% of the space, and the clustered index saved about 37%. The page-compressed table is really interesting: its clustered index is much smaller than the original table, while its non-clustered index is about 39% smaller.

I’m going to turn on STATISTICS IO to compare the number of pages touched when I query each table:

SET STATISTICS IO ON;
GO

SELECT SUM(Quantity) AS ItemsPurchased
FROM bigTransactionHistory;

SELECT SUM(Quantity) AS ItemsPurchased
FROM bigTransactionHistoryROW;

SELECT SUM(Quantity) AS ItemsPurchased
FROM bigTransactionHistoryPAGE;


You can see that less I/O is needed for the compressed tables. That's to be expected, since the clustered and non-clustered indexes are smaller when compressed.

The next query shows how many pages are in the buffer for each index:

SELECT COUNT(*) AS cached_pages_count,
       name, index_id
FROM sys.dm_os_buffer_descriptors AS bd
INNER JOIN
(
    SELECT OBJECT_NAME(object_id) AS name,
           index_id, allocation_unit_id
    FROM sys.allocation_units AS au
    INNER JOIN sys.partitions AS p
        ON au.container_id = p.hobt_id
       AND (au.type = 1 OR au.type = 3)
    UNION ALL
    SELECT OBJECT_NAME(object_id) AS name,
           index_id, allocation_unit_id
    FROM sys.allocation_units AS au
    INNER JOIN sys.partitions AS p
        ON au.container_id = p.partition_id
       AND au.type = 2
) AS obj
    ON bd.allocation_unit_id = obj.allocation_unit_id
WHERE database_id = DB_ID()
  AND name LIKE 'bigTransactionHistory%'
GROUP BY name, index_id
ORDER BY cached_pages_count DESC;


Again, when an index resides on fewer pages, the data takes less space in memory. Notice that the clustered index was used for the page-compressed table; as we saw, the page-compressed clustered index is much smaller than the non-clustered index.

On this Azure VM, I have seen inconsistent query times. For example, if I use DBCC DROPCLEANBUFFERS, or even restart SQL Server to ensure the data must be loaded from disk, one of these queries could run anywhere from 1 second to a minute. I suspect the VM is sharing resources with other VMs, and that is causing the discrepancies. I decided to run each query in a loop to better see the difference. Inside the loop, I use DBCC DROPCLEANBUFFERS to make sure the data is loaded from disk rather than from cache.

DECLARE @Count INT = 0;
WHILE @Count < 100
BEGIN
    DBCC DROPCLEANBUFFERS;
    SELECT SUM(Quantity) AS ItemsPurchased
    FROM bigTransactionHistory;
    SET @Count += 1;
END;

The loop against the uncompressed table took 3 minutes and 31 seconds. The row-compressed table took 3 minutes and 4 seconds, and the page-compressed table took 3 minutes and 18 seconds. So in this case the row-compressed query performed best, with the page-compressed query next.

  • Insert comparison

Let's take a look at inserting data. In this script, I populated an uncompressed table and our two compressed tables with a loop of 1,000 inserts of 1,000 rows each. To make things fair, I ran DBCC DROPCLEANBUFFERS before each loop.

DBCC DROPCLEANBUFFERS;
GO

-- 2:40
DECLARE @Count INT = 0;
WHILE @Count < 1000
BEGIN
    INSERT INTO dbo.bigTransactionHistoryTEST
    SELECT *
    FROM bigTransactionHistory
    WHERE TransactionID BETWEEN @Count * 1000 + 1 AND (@Count + 1) * 1000;
    SET @Count = @Count + 1;
END;

The uncompressed inserts took 2 minutes and 40 seconds. The row-compressed inserts completed a bit faster at 2 minutes and 23 seconds, and the page-compressed inserts completed in 2 minutes and 14 seconds. The page-compressed inserts were actually fastest, and I suspect that it might be due to less I/O.

  • Update comparison

Let’s take a look at updates.

DBCC DROPCLEANBUFFERS;
GO

-- 1:48
DECLARE @Count INT = 0;
WHILE @Count < 1000
BEGIN
    UPDATE dbo.bigTransactionHistoryTEST
    SET Quantity = Quantity + 1
    WHERE TransactionID BETWEEN @Count * 1000 + 1 AND (@Count + 1) * 1000;
    SET @Count = @Count + 1;
END;

Again, I ran loops; in this case, 1,000 updates of 1,000 rows each. Updating the uncompressed table took 1 minute and 48 seconds. Updating the row-compressed table took 1 minute and 36 seconds. The page-compressed table was much slower at 2 minutes and 21 seconds. As mentioned earlier, Microsoft recommends page compression for workloads with few updates, and you can see that updates are indeed slower with page compression.

To summarize, I saw decent space savings, especially with page compression. The compressed tables performed better for both selects and inserts. Updates, however, performed noticeably worse with page compression.


Summary:

This article gave an overview of data compression. Compression is now available in all editions of SQL Server, starting with 2016 SP1. You can implement row- and page-level compression, and you can also use the new COMPRESS function. Compression can improve performance because of decreased I/O and memory pressure. I hope this article has been informative for you.


Live Query Statistics in SQL Server 2016

Today we have a very good feature that will help us in our daily work (performance troubleshooting and query debugging). As we know, to tune any SQL Server query, you should first check its execution plan. From SQL Server 2005 through SQL Server 2014, we had two types of execution plan:

  • Estimated Execution Plan: gives you an idea of how SQL Server will most likely execute the query.
  • Actual Execution Plan: shows you things that might hint at “out-of-date” statistics, etc. But to get it, you must run the query, which can take a long time.

So previously, if a query returned millions of records and we needed to check its execution plan, we went straight to the estimated execution plan, because getting the actual execution plan meant waiting for the query to finish and return all its results. Now SQL Server 2016 gives us a new execution plan type, “Live Query Statistics”. This new feature will help us see the execution plan while the query is running, and watch the query's effect on the plan step by step, including:

  • Query execution percentage
  • Actual number of rows
  • Elapsed time
  • Operator progress

What I use here in my demo is a simple query that returns more than 8 million records. If I waited for this query to return all its data before seeing the actual execution plan, I would wait around 20 minutes. Now, in SQL Server 2016, I can watch the actual execution plan live with Live Query Statistics:

SELECT *
FROM sys.all_columns tmp1
CROSS JOIN sys.all_columns tmp2;
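
In SSMS, you turn this on with the “Include Live Query Statistics” toolbar button. Under the hood, the live plan is fed by the sys.dm_exec_query_profiles DMV, so you can also watch per-operator progress from a second session. A hedged sketch, where 52 is an assumed session_id for the session running the big query:

SELECT node_id,
       physical_operator_name,
       row_count,
       estimate_row_count,
       CAST(100.0 * row_count / NULLIF(estimate_row_count, 0)
            AS decimal(5, 1)) AS pct_of_estimate
FROM sys.dm_exec_query_profiles
WHERE session_id = 52   -- assumed: the session running the query
ORDER BY node_id;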


To know more about the new features in SQL Server 2016, keep following us; to check the previous posts on SQL Server 2016, go HERE.

 

Posted on November 8, 2015 in SQL Server 2016

 


Next SQL Gulf event “SQL Gulf #2” at Microsoft Dubai


The SQL Gulf Community has revealed its upcoming event, “SQL Gulf #2”, at Microsoft Dubai on 5/9/2015. It is the first SQL Server event ever in the Middle East to talk about SQL Server 2016!
SQL Gulf #1 was held in Riyadh, Saudi Arabia on 30/8/2014, and indeed it was an extremely successful event, which is why we are so excited to spread the word everywhere in the Gulf through SQL Gulf events in all Gulf cities. Interestingly, top-notch SQL Server experts and highly popular speakers are coming from the US, UK, and elsewhere to speak to you at SQL Gulf #2, such as Kevin Kline from the US and Satya Shyam K Jayanty from the UK.

Register here to reserve your seat ASAP; seats are limited. Below is the event program:

Session Abstracts:

1-FIRE! A Foolproof Checklist for Tuning and Troubleshooting

Session Abstract: Learning how to detect, diagnose, and resolve performance problems in SQL Server is tough. Often, years are spent learning how to use the tools and techniques that help you detect when a problem is occurring, diagnose its root cause, and then resolve it.
In this session, attendees will see demonstrations of the tools and techniques that make difficult troubleshooting scenarios much faster and easier, including:
  • XEvents, Profiler/Traces, and PerfMon
  • Using Dynamic Management Views (DMVs)
  • Advanced Diagnostics Using Wait Stats
  • Reading SQL Server execution plans

Every DBA needs to know how to keep their SQL Server in tip-top condition, and you'll need the skills covered in this session to do it.
Prerequisites: Intermediate database administration and development skills, especially competence with SSMS.
Goal 1: Learn the “sieve” method of troubleshooting and problem solving, and how to make SQL Server alert you when problems arise.
Goal 2: Discover how to use wait stat analysis, as well as correlate performance information from other sources inside of SQL Server including DMVs, performance counters, and Xevent/trace information.
Goal 3: Learn how to use the most important native tools within SQL Server through live demos to successfully conduct troubleshooting and performance tuning.

Summary: Microsoft ships a multitude of tools to help detect, diagnose and resolve problems inside of SQL Server. But which is best to use and when? This session teaches attendees how to tackle the troubleshooting process to achieve repeated, optimal results.

2-What do I need to know about Data Platform Upgrade best practices & techniques

The Data Platform Upgrade topic has been a popular session that I've presented at major conferences like Microsoft Tech-Ed (North America, Europe & India), SQLPASS, SQLSaturdays, and SQLbits since 2008.

In this session, we will review in depth the end-to-end upgrade process, covering the essential phases, steps, and issues involved in upgrading SQL Server 2000, 2005, 2008 R2, and SQL Server 2012/2014 (with a good overview of 2016 too) using best practices and available resources.
We will cover the complete upgrade cycle, including the preparation tasks, upgrade tasks, and post-upgrade tasks, with real-world examples from my consulting experience expanding on why and how such a solution works in critical situations.

3-Performance Dreams started at SQL Server 2014 and come true now at SQL Server 2016

It is just like this, because Microsoft launched SQL Server 2014 with unbelievable performance gains and substantial improvements that you cannot give up. Come to my session and you will learn about the many new features and rich performance powers of SQL Server 2014, particularly the Microsoft project “Hekaton” (built-in In-Memory OLTP), CCI (columnstore indexes), Resource Governor for I/O consumption, lock priority management, and single-partition online index rebuild. You will get hands-on experience with the definitions, architecture and design, values and benefits, caveats, and recommendations for each of them, so that you can make a conscious decision about upgrading your databases to SQL Server 2014. Keep in mind that these features have some limitations that should be considered carefully before production deployment. If that worries you, don't be upset; it is not the end of the road, because Microsoft has rolled out SQL Server 2016, which successfully addresses many of those limitations. Indeed, the performance dreams of SQL Server 2014 come true in SQL Server 2016. I will speak simply, to help those who haven't breathed SQL Server for the last little while.

4-SQL Server Internals and Architecture

Session Abstract: Let's face it: you can do many IT jobs related to SQL Server effectively without knowing the internals of how SQL Server works. Many great developers, DBAs, and designers get their day-to-day work done on time and with reasonable quality while never really knowing what's happening behind the scenes. But if you want to take your skills to the next level, it's critical to know SQL Server's internal processes and architecture. This session will answer questions like:

  • What are the various areas of memory inside of SQL Server?
  • How are queries handled behind the scenes?
  • What does SQL Server do with procedural code, like functions, procedures, and triggers?
  • What happens during checkpoints? Lazywrites?
  • How are I/Os handled with regards to transaction logs and databases?
  • What happens when transaction logs and databases grow or shrink?

This fast-paced session will take you through many aspects of the internal operations of SQL Server and, for those topics we don't cover, will point you to resources where you can get more information. So strap in, as we cover all these topics and more at speed, with tongue planted firmly in cheek!

Prerequisites: Basic understanding of SQL Server operations and activities – such as transactions, queries, and preventative maintenance tasks like backup and recovery.

Goal 1: Learn about the major components within the SQL Server architecture, starting with the relational engine and storage engine, working down to greater and greater detail.
Goal 2: See what happens inside the SQL Server query optimizer and how it affects transaction execution time, for both read-only and read-write transactions.
Goal 3: Review the major operational processes inside of SQL Server that affect memory management and IO activity.

5-Power BI deployment best practices to deliver data analytics to the business

How to deploy Power BI, implement configuration parameters, and package BI features as part of an Office 365 rollout in your organisation.

Having said that, cloud computing is another aspect of this technology that has made it possible to get data to the end-user within a few clicks and ticks. Let us review how to manage and connect on-premise data to cloud capabilities that offer the full advantage of data catalogue features while keeping data secure per Information Governance standards. Beyond the nuts and bolts, performance is another aspect every admin must keep up with, so let us look into a few settings that maximize performance and optimize access to data as required.

Gain understanding of, and insight into, the tools that are available for your Business Intelligence needs.

There will be a showcase of demos illustrating where to begin and how to proceed in the BI world.

 

Posted on August 12, 2015 in Event

 


SQL Server 2014 Service Pack 1 has been released

Congratulations, Microsoft, and big congratulations to us (from the SQL Server Releases blog):


One year ago, Microsoft launched SQL Server 2014. Today, April 15, Microsoft is pleased to announce the release of SQL Server 2014 Service Pack 1 (SP1). The Service Pack is available for download on the Microsoft Download Center.

SQL Server 2014 SP1 contains fixes provided in SQL Server 2014 CU 1 up to and including CU 5, as well as a rollup of fixes previously shipped in SQL Server 2012 SP2. For highlights of the release, please read the Knowledge Base Article for Microsoft SQL Server 2014 SP1.

As part of our continued commitment to software excellence for our customers, this upgrade is available to all customers with existing SQL Server 2014 deployments via the download links below. SQL Server 2014 with SP1 will be available in additional venues including the Volume Licensing Center and via Microsoft Update starting May 1, 2015.

Microsoft® SQL Server® 2014 SP1
Microsoft® SQL Server® 2014 SP1 Express
Microsoft® SQL Server® 2014 SP1 Feature Pack

To check all my posts, you can go to my personal website or download this Excel sheet: My Technical POSTS.

View all my tips, LinkedIn, Website, Slideshare, YouTube Channel.

 

Posted on April 16, 2015 in SQL Server 2014

 


How to Avoid the Performance Risk of DELETE Statements

Introduction

Hello everybody. In my last post, I explained how we can create a clustered index on all heap tables with one single click (for the video, check this link). Today I will write about something new: how we can avoid the performance risk of the DELETE statement. Yes, a DELETE statement can cause big trouble on your server if you have a table with a huge volume of data and many transactions hitting it. You should be careful with such critical databases and objects, because you may need to delete huge amounts of data from a table like this based on certain criteria; for example, you may need to delete 1,000,000 records.

Problem

When you need to delete 1,000,000 records with some WHERE condition from one of the critical tables on your server, don't take the wrong path of writing one direct DELETE statement. It will cost you more ASYNC_NETWORK_IO and CXPACKET waits, and it can also cause schema locks or query blocking.

Solution :


Forget the direct DELETE statement; there is nothing to think about here! You should run the delete in batches (a T-SQL sketch follows this list):

  1. Grab a unique ID from the target table and insert it into a temp table.
  2. Loop over the temp table, taking the first 1,000 records each time, for example.
  3. Use MERGE to delete the rows in the target table whose ID equals the ID in the source (temp) table.
  4. Update the loop counter.
  5. Commit the transaction if it succeeds; roll it back if it fails.
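
Here is a minimal sketch of those five steps, assuming a hypothetical target table dbo.BigTable keyed by TransactionID and an assumed deletion criterion; it illustrates the pattern, not the author's exact demo script:

-- step 1: stage the unique IDs to delete
SELECT TransactionID
INTO #ToDelete
FROM dbo.BigTable
WHERE TransactionDate < '2014-01-01';   -- assumed criterion

DECLARE @Batch int = 1000;
DECLARE @Ids TABLE (TransactionID int PRIMARY KEY);

WHILE 1 = 1
BEGIN
    -- step 2: pull the next batch of IDs out of the staging table
    DELETE FROM @Ids;
    DELETE TOP (@Batch) FROM #ToDelete
    OUTPUT deleted.TransactionID INTO @Ids;
    IF NOT EXISTS (SELECT 1 FROM @Ids) BREAK;   -- step 4: loop bookkeeping

    BEGIN TRY
        BEGIN TRAN;
        -- step 3: MERGE deletes the matching rows in the target
        MERGE dbo.BigTable AS t
        USING @Ids AS s
            ON t.TransactionID = s.TransactionID
        WHEN MATCHED THEN DELETE;
        COMMIT TRAN;   -- step 5: commit on success...
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRAN;   -- ...roll back on failure
        THROW;
    END CATCH;
END;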

This way, the delete will run smoothly without any bad effect on the SQL Server cluster's performance. Don't think about the time; think about the impact. There is no problem with a query that takes time but has no impact; that is better than a fast query that executes in 3 seconds but fires up the server's CPU or I/O while it runs.

Things to consider while working with big tables:

  1. Use TRUNCATE TABLE if you need to delete all the rows.
  2. If you are deleting more than 70% of the data, create a temp table, copy only the records you need into it, and run TRUNCATE on the original table. This approach is much faster.
  3. Breaking a big transaction into small transactions applies to INSERT and UPDATE as well.
  4. Where possible, use table partitioning. This makes maintenance easy: you can switch out and drop a partition if you need to delete it (see the sketch after this list).
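
For point 4, a hedged sketch of the partition approach, assuming dbo.BigTable is partitioned and dbo.BigTable_Staging is an empty table with an identical schema on the same filegroup (both names are hypothetical):

-- switching a partition out is a metadata-only operation
ALTER TABLE dbo.BigTable
SWITCH PARTITION 3 TO dbo.BigTable_Staging;

-- the old rows disappear with a DROP instead of a row-by-row DELETE
DROP TABLE dbo.BigTable_Staging;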


 

Posted on March 13, 2015 in General topics

 


Create Clustered index on all heap tables by one single click V2

Hello, my followers. In my last post, I created a script to help us find the heap tables at the database level and then automatically create a clustered index on each heap table. After working with it, I found that it operated at the database level, not the server level, which makes it very hard to run against databases one by one (I love to create general scripts for the whole server, saved as stored procedures under MSDB). I also found that the script took the first column in the table and created the clustered index on it, which is not correct from an index-design and schema-design point of view. So I updated it to cover two very important new points that make the stored procedure more comprehensive and reliable.

Updates in version 2:

  1. Select the best column of the heap table on which to create the clustered index (I used a T-SQL CASE WHEN to return only columns whose data types are suitable for a clustered index key).
  2. The script now runs at the server level to cover every database on the SQL Server instance. (I converted the query to a dynamic query, looped over each database on the server, and executed the dynamic query in each one to print one script per database.)

How to execute the stored procedure Check_Heap_Tables:

  1. Create the stored procedure Check_Heap_Tables in the MSDB database.
  2. Execute the Check_Heap_Tables stored procedure.
  3. Copy the T-SQL result, then execute it in another session.
  4. Copy the T-SQL for the clustered index creation and execute it in a new session.
(A simplified sketch of the idea follows this list.)
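
To give a flavor of what the procedure generates, here is a simplified, single-database sketch (not the author's full server-level script): it lists heaps and drafts a clustered index on the first integer column, a much cruder "best column" rule than the stored procedure uses:

SELECT QUOTENAME(s.name) + '.' + QUOTENAME(t.name) AS HeapTable,
       'CREATE CLUSTERED INDEX ' + QUOTENAME('CIX_' + t.name)
       + ' ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
       + ' (' + QUOTENAME(c.name) + ');' AS DraftScript
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.indexes AS i ON i.object_id = t.object_id
                     AND i.index_id = 0          -- index_id 0 = heap
CROSS APPLY (SELECT TOP (1) col.name             -- first integer column
             FROM sys.columns AS col
             JOIN sys.types AS ty ON ty.user_type_id = col.user_type_id
             WHERE col.object_id = t.object_id
               AND ty.name IN ('int', 'bigint')
             ORDER BY col.column_id) AS c(name);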


 

Posted on March 11, 2015 in General topics

 


SQL Server Cumulative Update package #6 is now available for SQL Server 2014

Hello guys. SQL Server 2014 is the latest version of Microsoft SQL Server, and it comes with awesome enhancements and features. We are still waiting for SP1; it has not been released yet, so for now we update our environments with the latest cumulative update package to fix issues in SQL Server 2014.

Now Microsoft has announced SQL Server Cumulative Update package 6. For more information about the hotfixes included in this cumulative update package, check this link.

For more information about the latest builds of SQL Server 2014, check this post from the SQLSentry community:

  • KB Article: KB #3031047
  • 64 fixes total (55 fixes listed publicly at time of publication)
  • Build number is 12.0.2480
  • Relevant for @@VERSION 12.0.2000 through 12.0.2479

Follow the author:

View all my tips, LinkedIn, Website, Slideshare.

 

Posted on February 22, 2015 in General topics

 
