
Improve SQL Server Performance with Compression


SQL Server data compression is now available in all editions of SQL Server, starting with 2016 SP1.

In this article, you will learn not only how data compression saves space, but also how it can sometimes improve performance.

Space Savings vs. Performance

When I first heard about compression back in 2008, my first thought was that it would carry a big performance penalty. Compression would save disk space, but it would probably slow things down as the data was compressed and decompressed. It turns out that compression can actually improve performance instead. Because compressed data fits in fewer data pages, I/O requirements decrease. Since I/O is generally the bottleneck in SQL Server, this can improve performance. Compressed data also has a smaller memory footprint: when querying compressed data, fewer pages are copied to the buffer pool. The one resource that takes a hit is CPU. You do need some CPU headroom, because compression requires additional CPU resources.


The good news is that if the workload is reasonably tuned, many SQL Server instances have more CPU resources than they need. One note of caution: don't look to compression as the solution to major performance issues. You still need to look at physical resources, configuration, indexing, and query tuning. The point I'm trying to make is that decreased I/O and better memory utilization will benefit the workload in many cases.

Availability

Compression has been available in every version of SQL Server from 2008 through 2016, but only in Enterprise Edition. Beginning with SQL Server 2016 SP1, it is available in Enterprise, Standard, and Express editions. I'm excited about this because compression is now within the reach of any organization.

Compression Types

There are two types of compression that you can use to compress a table, index, or even a partition:

  • Row Level:

  • Row level compression works by storing fixed-width data types as variable-length data types. Nulls and zeroes take no space. For example, in an uncompressed table, an integer column takes 4 bytes per row, even for rows that hold a small number such as 1 or 10, or even NULL. Once row compression is implemented on the table, each value is stored in the smallest possible number of bytes. So, outside of the metadata, storing a 1 in an integer column takes 1 byte, and storing a NULL or 0 takes 0 bytes.


  • Starting with SQL Server 2012, Unicode compression is applied automatically when you use row compression. It affects nchar and nvarchar columns (but not nvarchar(max)). In an uncompressed table, each character in a Unicode column takes 2 bytes, even if a small character set is used. In those cases, compressing the table stores those characters in 1 byte instead.


  • In the technical article, Data Compression: Strategy, Capacity Planning, and Best Practices, Microsoft recommends using row compression on all data, as long as you have 10% extra CPU capacity, and, of course, as long as you achieve some space savings.
  • It carries only a low CPU penalty.
  • Page Level:

  • When you implement page level compression, the rows are automatically row-compressed first.
  • Page level compression also removes redundant data within a page using two mechanisms: prefix compression and dictionary compression.
  • Page level compression can shrink a table further than row compression, but it is recommended for tables that are mostly inserted into and not updated very often.
  • It does carry a higher CPU penalty. (A short sketch of enabling both compression types follows this list.)
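
As a rough illustration of how either type is turned on (the table and index names here are hypothetical, not taken from the demo later in this article), you can estimate the savings first and then rebuild the object with the desired setting:

-- Estimate the space savings before committing to a rebuild
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'SalesOrderDetail',   -- hypothetical table
     @index_id         = NULL,                 -- all indexes
     @partition_number = NULL,                 -- all partitions
     @data_compression = 'PAGE';

-- Rebuild the table (heap or clustered index) with row compression
ALTER TABLE dbo.SalesOrderDetail
REBUILD WITH (DATA_COMPRESSION = ROW);

-- Rebuild a single non-clustered index with page compression
ALTER INDEX IX_SalesOrderDetail_ProductID ON dbo.SalesOrderDetail
REBUILD WITH (DATA_COMPRESSION = PAGE);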

Neither of these compression types works on row-overflow data, which is data from a row that exceeds 8 KB. For example, you can create a table that has two varchar(8000) columns. If a row exceeds 8,060 bytes, SQL Server will move one or more of those columns to another page so that the row fits. One way to get around this limitation is the new COMPRESS function.

COMPRESS Function

  • You can use this function to compress individual values, which includes row-overflow data.
  • It uses the GZIP algorithm to compress the values. There is a downside, however: the COMPRESS function must be applied each time a value is inserted or updated, and the DECOMPRESS function must be used to read the value back. This means changes to the application or stored procedures, as sketched below.
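
A minimal sketch of that pattern (the table and column names are hypothetical): the value is stored as VARBINARY(MAX), COMPRESS is applied on every write, and DECOMPRESS plus a cast back to the original type is applied on every read.

-- Hypothetical table that stores large text compressed with COMPRESS
CREATE TABLE dbo.DocumentArchive
(
    DocumentId   INT IDENTITY(1,1) PRIMARY KEY,
    DocumentText VARBINARY(MAX) NOT NULL      -- holds the GZIP-compressed value
);

-- Every insert (or update) has to apply COMPRESS explicitly
INSERT INTO dbo.DocumentArchive (DocumentText)
VALUES (COMPRESS(N'... a long document that might otherwise overflow the row ...'));

-- Every read has to apply DECOMPRESS and cast back to the original type
SELECT DocumentId,
       CAST(DECOMPRESS(DocumentText) AS NVARCHAR(MAX)) AS DocumentText
FROM dbo.DocumentArchive;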

Performance Metrics

In this demo, we’ll do some performance comparisons. We’ll take a close look at I/O and memory impact. We’ll see how compression affects both reads and writes to the data.

  • Select comparison

I created two tables: one with page compression called “bigTransactionHistoryPAGE” and one with row compression called “bigTransactionHistoryROW”. I also created a table with no compression called “bigTransactionHistory” that I’ll use during this demonstration as well. Finally, I populated the two compressed tables with around 31 million records.

Let’s take a look at the number of pages in each table or index:

SELECT OBJECT_NAME(i.[object_id]) AS TableName,
       i.name AS IndexName,
       SUM(s.used_page_count) AS IndexPages,
       -- 131819 and 143645 are the page counts of the uncompressed non-clustered
       -- and clustered indexes, used as the baseline for the savings calculation
       FORMAT(1 - SUM(s.used_page_count) * 1.0 /
                  CASE WHEN i.name LIKE 'IX%' THEN 131819 ELSE 143645 END, 'P') AS PercentSaved
FROM sys.dm_db_partition_stats AS s
JOIN sys.indexes AS i
    ON s.[object_id] = i.[object_id] AND s.index_id = i.index_id
WHERE OBJECT_NAME(i.[object_id]) LIKE 'bigTransactionHistory%'
  AND OBJECT_NAME(i.[object_id]) <> 'bigTransactionHistoryTEST'
GROUP BY i.[object_id], i.name
ORDER BY IndexPages DESC;


The uncompressed table is the largest. The non-clustered index compressed with row compression saved about 30% of the space, and the clustered index saved about 37%. The page-compressed table is really interesting: the clustered index is much smaller than the original table, while the non-clustered index is about 39% smaller.

I’m going to turn on STATISTICS IO to compare the number of pages touched when I query each table:

SET STATISTICS IO ON;
GO

SELECT SUM(Quantity) AS ItemsPurchased
FROM bigTransactionHistory;

SELECT SUM(Quantity) AS ItemsPurchased
FROM bigTransactionHistoryROW;

SELECT SUM(Quantity) AS ItemsPurchased
FROM bigTransactionHistoryPAGE;


You can see that less I/O is needed for the compressed tables. That makes sense, since the clustered and non-clustered indexes are smaller when compressed.

The next query shows how many pages are in the buffer for each index:

SELECT COUNT(*) AS cached_pages_count,
       name,
       index_id
FROM sys.dm_os_buffer_descriptors AS bd
INNER JOIN
(
    SELECT OBJECT_NAME(object_id) AS name,
           index_id,
           allocation_unit_id
    FROM sys.allocation_units AS au
    INNER JOIN sys.partitions AS p
        ON au.container_id = p.hobt_id
       AND (au.type = 1 OR au.type = 3)
    UNION ALL
    SELECT OBJECT_NAME(object_id) AS name,
           index_id,
           allocation_unit_id
    FROM sys.allocation_units AS au
    INNER JOIN sys.partitions AS p
        ON au.container_id = p.partition_id
       AND au.type = 2
) AS obj
    ON bd.allocation_unit_id = obj.allocation_unit_id
WHERE database_id = DB_ID()
  AND name LIKE 'bigTransactionHistory%'
GROUP BY name, index_id
ORDER BY cached_pages_count DESC;


Again, when an index resides on fewer pages, the data takes up less space in memory. Notice that the clustered index was used for the page-compressed table; we saw that the page-compressed clustered index is much smaller than its non-clustered index.

On this Azure VM, I have seen inconsistent timings for these queries. For example, if I use DBCC DROPCLEANBUFFERS, or even restart SQL Server to ensure the data must be loaded from disk, one of these queries could run anywhere from 1 second to a full minute. I suspect the VM is sharing resources with other VMs, and that is causing the discrepancies. I decided to run each query in a loop to better see the difference. Inside the loop, I use DBCC DROPCLEANBUFFERS to make sure the data is loaded from disk rather than served from cache.

DECLARE @Count INT = 0;
WHILE @Count < 100
BEGIN
    DBCC DROPCLEANBUFFERS;

    SELECT SUM(Quantity) AS ItemsPurchased
    FROM bigTransactionHistory;

    SET @Count += 1;
END;

The loop against the uncompressed table took 3 minutes and 31 seconds. The row-compressed table took 3 minutes and 4 seconds, and the page-compressed table took 3 minutes and 18 seconds. So in this case, the row-compressed query performed best, with the page-compressed query next.

  • Insert comparison

Let’s take a look at inserting data. In this script, I populated an uncompressed table and the two compressed tables with a loop of 1,000 inserts of 1,000 rows each. To make things fair, I ran DBCC DROPCLEANBUFFERS before each loop.

DBCC DROPCLEANBUFFERS;
GO

-- Elapsed time: 2:40 against the uncompressed table
DECLARE @Count INT = 0;
WHILE @Count < 1000
BEGIN
    INSERT INTO dbo.bigTransactionHistoryTEST
    SELECT *
    FROM bigTransactionHistory
    WHERE TransactionID BETWEEN @Count * 1000 + 1 AND (@Count + 1) * 1000;

    SET @Count = @Count + 1;
END;

The uncompressed inserts took 2 minutes and 40 seconds. The row-compressed inserts completed a bit faster at 2 minutes and 23 seconds, and the page-compressed inserts completed in 2 minutes and 14 seconds. The page-compressed inserts were actually fastest, and I suspect that it might be due to less I/O.

  • Update comparison

Let’s take a look at updates.

DBCC DROPCLEANBUFFERS;
GO

-- Elapsed time: 1:48 against the uncompressed table
DECLARE @Count INT = 0;
WHILE @Count < 1000
BEGIN
    UPDATE dbo.bigTransactionHistoryTEST
    SET Quantity = Quantity + 1
    WHERE TransactionID BETWEEN @Count * 1000 + 1 AND (@Count + 1) * 1000;

    SET @Count = @Count + 1;
END;

Again, I ran loops; in this case, 1,000 updates of 1,000 rows each. Updating the uncompressed table took 1 minute and 48 seconds. Updating the row-compressed table took 1 minute and 36 seconds. The page-compressed table was much slower at 2 minutes and 21 seconds. Earlier, I mentioned that Microsoft recommends page compression for workloads that have few updates, and you can see here that updates are indeed slower with page compression.

To summarize, I saw decent space savings, especially with page compression. The compressed tables performed better for both selects and inserts. Updates, however, performed noticeably worse with page compression.


Summary:

This article is an overview of data compression. Compression is now available in all editions of SQL Server, starting with 2016 SP1. You can implement row and page level compression, and you can also use the new COMPRESS function. Compression can improve performance because of decreased I/O and memory pressure. I hope this article has been informative for you.


Repair Corrupt SQL Database with Advanced Recovery Option

Corruption of your SQL Server database can destroy its contents if the issue is left unattended while recovery is still possible. To repair a corrupt SQL database, you must always make sure you plan your disaster recovery solution. Read the rest of this entry »

 

Posted by on October 4, 2017 in problems, SQL Server 2012

 

Methods of protection and recommendations against the threats and dangers of the Ransomware virus

I would like to inform you that there are warnings about risks to our servers and computers, based on statements published today on more than one website (by Microsoft, the NCSC, etc.) about the emergence of a Ransomware virus.

This virus has attacked many PCs and servers all over the world, which is why we need to do the following things ASAP.

 


Read the rest of this entry »

 

Posted by on May 14, 2017 in General topics

 

Hybrid Cloud and Hekaton Features in SQL Server 2014

Introduction

Microsoft SQL Server 2014 is considered the first version to support the Hybrid Cloud, adding a lot of exciting new features.

In this article, I will cover some of the top new features, focusing on Hekaton and the Hybrid Cloud enhancements:

Hekaton

Hekaton is the code name for the new In-Memory OLTP feature. It is a new database engine component, fully integrated with SQL Server and designed for memory-resident data and OLTP workloads. In simple terms, with Hekaton we can store an entire table in memory.

Let’s list some of the benefits of this new feature:

  • Memory-Optimized-Tables can be accessed using T-SQL like Disk-Based-Tables.
  • Memory-Optimized-Tables and Disk-Based-Tables can be referenced in the same query, and both types of tables can be updated in one transaction.
  • Stored procedures that reference only Memory-Optimized-Tables can be natively compiled into machine code, which improves performance.
  • The new engine is designed for a high level of session concurrency for OLTP transactions.

There are still some limitations for Memory-Optimized-Tables in SQL Server 2014:

  • The ALTER TABLE statement, the SP_RENAME stored procedure, altering the BUCKET_COUNT, and adding or removing an index outside of the CREATE TABLE statement are not supported for In-Memory tables
  • Some constraints are not supported (CHECK, FOREIGN KEY, UNIQUE)
  • RANGE INDEXES and TRIGGERS are not supported by In-Memory tables
  • REPLICATION, MIRRORING, and LINKED SERVERS are incompatible with Memory-Optimized-Tables.

To know more information, you can check SQL Server Support for In-Memory OLTP.

Memory-Optimized-Tables are appropriate for the following scenarios:

  • A table has a high insertion rate of data from multiple concurrent sources
  • A table cannot meet scale-up requirements for high read performance, especially with periodic batch inserts and updates
  • Intensive logic processing inside a stored procedure
  • A database solution cannot achieve low-latency business transactions

Let’s now go through the steps to create a Memory-Optimized-Table
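
A minimal sketch of those steps (the database name, filegroup, file path, table, and bucket count below are hypothetical examples, not taken from the source article): first add a memory-optimized filegroup and container to the database, then create the table with MEMORY_OPTIMIZED = ON.

-- 1. Add a memory-optimized filegroup and a container to the database
ALTER DATABASE SalesDB
ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE SalesDB
ADD FILE (NAME = 'imoltp_container', FILENAME = 'C:\Data\imoltp_container')
TO FILEGROUP imoltp_fg;

-- 2. Create the Memory-Optimized-Table with a nonclustered hash index on the key
CREATE TABLE dbo.ShoppingCart
(
    CartId     INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId INT       NOT NULL,
    CreatedAt  DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);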

 

For more information, check the source article from HERE

 

Posted by on March 1, 2017 in General topics

 

How to analyze Storage Subsystem Performance in SQL Server

Introduction

To improve performance, it is common for DBAs to look at every aspect except storage subsystem performance, even though issues are in fact often caused by a poorly performing storage subsystem. Therefore, I want to give you some tools and recommendations that you can use to keep your storage subsystem from becoming a performance problem.

In this article, I will cover how to measure and analyze your storage subsystem performance and how to test your storage subsystem, including:

  1. Main metrics for storage performance
  2. Operating System Tools to measure storage performance
  3. SQL Server Tools to measure storage performance
  4. Using SQL Server to test storage performance

Main metrics for storage performance:

In this section, I will introduce the three main metrics behind most storage performance issues:

  1. Latency
    Each IO request takes some time to complete. This latency is measured in milliseconds (ms) and should be as low as possible (a query sketch for measuring per-file latency from inside SQL Server follows this list).
  2. IOPS
    IOPS means IO operations per second: the number of read or write operations that can be completed in one second. A given number of IO operations also yields a certain throughput in megabytes per second, so the two are related.
  3. Throughput
    The most common value quoted by a disk manufacturer is how much throughput a certain disk can deliver. This number is usually expressed in megabytes per second (MB/s), and it is easy to believe that this would be the most important factor.
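
As a rough illustration of the SQL Server side of this (a sketch, not taken from the source article), the sys.dm_io_virtual_file_stats DMV can be used to compute average read and write latency per database file:

SELECT DB_NAME(vfs.database_id) AS DatabaseName,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END   AS avg_read_latency_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id
   AND vfs.file_id = mf.file_id
ORDER BY avg_read_latency_ms DESC;

Keep in mind that these numbers are cumulative since the instance last started, so they show long-term averages rather than a current spike.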

For more information, please check the source article from HERE

 

Posted by on March 1, 2017 in General topics

 

Top 5 new features in SQL Server 2012 for developers

Introduction:

Microsoft SQL Server 2012 introduces many features that help database administrators, database developers, and BI developers.

In this article, I will cover some of the new features for database developers in these main points:

  • Database Engine Improvements
  • Improvements to SQL Server Management Studio Debugging
  • Changes to the Scope of Objects
  • Conclusion
  • References

Database Engine Improvements:

  1. File Tables: When we open up SQL Server Management Studio, one of the first changes we notice in SQL Server 2012 is the addition of a new type of table called a FileTable.

    A FileTable allows us to make a connection between a Windows share and a database table, such that any file that appears in the share becomes a row in the table.

    It allows us to run queries that tell us how many files we have in that shared location, what types of files they are, what size the files are, and so on.

    Setting this up is a multi-step process (a T-SQL sketch of the remaining steps follows the screenshot below):

    • Enable FILESTREAM: this is the first step, and it is done at the instance level. Using the Configuration Manager tool, open the properties of the instance we are interested in; there is a FILESTREAM tab, and on it we should select all of the checkboxes (enabling all features), as shown below.

[Screenshot: the FILESTREAM tab in the instance properties with all options enabled]
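
A minimal sketch of the remaining steps (the database name, paths, and table name are hypothetical; the source article only shows the first step here): set the FILESTREAM access level, add a FILESTREAM filegroup and container, enable non-transacted access, and create the FileTable.

-- 2. Set the FILESTREAM access level for the instance (2 = T-SQL and Win32 streaming access)
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;

-- 3. Add a FILESTREAM filegroup and container, and enable non-transacted access
ALTER DATABASE DemoDB
ADD FILEGROUP fsGroup CONTAINS FILESTREAM;

ALTER DATABASE DemoDB
ADD FILE (NAME = 'fsData', FILENAME = 'C:\Data\DemoDB_fs')
TO FILEGROUP fsGroup;

ALTER DATABASE DemoDB
SET FILESTREAM (NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'DemoDB');

-- 4. Create the FileTable; files dropped into its Windows share appear as rows
CREATE TABLE dbo.Documents AS FILETABLE
WITH (FILETABLE_DIRECTORY = 'Documents');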

 

To read the complete article, check it HERE

 

Posted by on February 18, 2017 in New Feature

 


SQL Server performance – measure Disk Response Time

Introduction

As DBAs, we all get to the point where we are asked to set up a new server for a particular environment. Setting up a new server is not a big thing, but answering the question of “how well will it work” might be tricky.

There are tons of things we can measure to judge how well a newly installed server will respond, but here I will discuss one of the most valuable resources of the server: the disk. Most often the disk is not measured correctly, and I have seen environments where the disk response time has never been measured at all. I will discuss a very handy tool from Microsoft that can solve this problem quickly: diskspd.exe!

It supersedes SQLIO, which was previously used to measure IO response time for the disk. The source code of diskspd.exe is hosted on GitHub, and you can download this free utility from Microsoft’s website using this link.

After you download the utility, you will get a zip file. Just unzip the file, and it will give you the folders and files shown in the screenshot below. You will need the diskspd executable inside the “amd64fre” folder if you have a 64-bit version of SQL Server (most of us will).
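
As a rough illustration (the parameter values and test file path below are hypothetical and should be tuned to resemble your own workload), a diskspd run that tests random 8 KB I/O and captures latency statistics might look like this:

diskspd.exe -c2G -b8K -d60 -t4 -o8 -r -w25 -Sh -L D:\Temp\testfile.dat

Here -c2G creates a 2 GB test file, -b8K uses 8 KB blocks (the SQL Server page size), -d60 runs for 60 seconds, -t4 and -o8 use 4 threads with 8 outstanding I/Os each, -r makes the access pattern random, -w25 issues 25% writes, -Sh disables software and hardware buffering, and -L records latency statistics.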

To read the complete article, please check it here

 

[Screenshot: folders and files extracted from the diskspd zip file]

 

Posted by on February 16, 2017 in General topics, SQL Server 2014

 
