
Query Store for Solving Query Performance Regressions

Query performance is a very important area of SQL Server; there are always badly performing queries around.

Query Store is the newest tool for tracking and resolving performance problems in SQL Server.

In this article, we are going to have a look at some practical uses of the SQL Server Query Store.

What is Query Store?

The Query Store has been described by Microsoft as a ‘flight data recorder’ for SQL Server queries. It tracks the queries run against the database, recording the details of the queries, their plans, and their runtime characteristics. Query Store is a per-database feature and runs automatically: once turned on, nothing further needs to be done to get it to track data. It runs in the background, collecting data and storing it for later analysis.

Query Store is available in SQL Server 2016 and later, and in Azure SQL Database v12 and later. It is available in all editions of SQL Server, even Express Edition.

How is Query Store different from other tracking options?

We have had query performance tracking for some time in the form of dynamic management views (mostly sys.dm_exec_query_stats and sys.dm_exec_query_plan) and tracing tools like SQL Server Profiler and Extended Events.

So, what makes Query Store different? Let me start answering that by describing a scenario that I encountered a couple of years ago.

A particular critical system was suddenly performing badly. It had been fine the previous week, and there had been no Extended Events sessions or Profiler traces running historically. The admin had restarted the server when the performance problem started, just to make sure it was not something related to a pending reboot.

As such, there was no historical performance data at all, and working out what had happened and why query performance was different that week was extremely difficult.


Query Store would have solved the problem of the lack of historical data. Once turned on, Query Store tracks query performance automatically. The data collected is persisted into the user database and hence, unlike with the DMVs, it is not lost on a server restart.

Since the Query Store data is persisted into the user database, it is included in the backups of that database as well. This makes it much easier to do performance analysis somewhere other than the production server.
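Because the data lives in the user database, it can also be queried directly through the Query Store catalog views, without the reports. A minimal sketch (the views and columns are the documented ones; ordering by average CPU time is just one way to slice the data):

-- Top queries by average CPU time, as captured by Query Store
SELECT TOP (10)
       q.query_id,
       qt.query_sql_text,
       p.plan_id,
       rs.count_executions,
       rs.avg_cpu_time,
       rs.avg_duration
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_cpu_time DESC;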


What exactly is a query performance regression?

A dictionary does not help much here. The Oxford dictionary defines regression as returning to an earlier state, which is definitely not what we mean here.

A performance regression occurs when a query's performance degrades over time. This degradation may be sudden or gradual, though sudden regressions are probably more common. The degradation may be permanent, or the query may at a later point return to its previously accepted behavior.

What causes a regression?

A common cause of a query performance regression is a plan change. Let’s briefly talk about the query optimization and plan caching process to see why.

T-SQL is a declarative language: the query expresses the desired results, not the process of getting to those results. It is up to the SQL Server engine to figure out how to get those results, and the portion of the engine that figures that out is the query optimizer, which takes the query and outputs a query plan: a description of the operations that the query processor can then execute to obtain the desired results.

For anything other than a trivial query, there are multiple different plan shapes that can produce the same results but differ in their internal details and in how long they take to execute. In theory, the optimizer will always take a query and produce a good plan, one that, if not the fastest possible plan, is at least fast enough. However, that is not always the case, and it is also perfectly possible for a query to have a plan that is fast for some parameter values and really slow for others.

Then there is plan caching, which adds another layer of complexity. Optimization is an expensive process, so SQL Server caches execution plans. When the query executes again, it can fetch the plan from the cache and execute it without paying the cost of the optimization process again.


There are many minor causes of plan changes, such as:

  • Bad parameter sniffing.
  • Out of date statistics.
  • Bad query patterns.
  • Overly-complicated queries.

These all tend to cause temporary performance regression.

Sometimes the regression can be caused by data growth which will be persistent.

Also, code changes or schema changes can cause a performance regression.
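To make the first of those causes concrete, here is a minimal sketch of bad parameter sniffing (the table, column, and procedure names are hypothetical): the plan compiled for the first parameter value seen is cached and reused for every later execution, even when it suits them poorly.

-- Hypothetical 'catch-all' procedure: a NULL parameter returns everything.
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID INT = NULL
AS
BEGIN
    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE (@CustomerID IS NULL OR CustomerID = @CustomerID);
END;
GO

-- If the first execution after a restart or plan eviction passes NULL,
-- the cached plan is likely a full scan; this selective call then reuses
-- that scan plan and suddenly runs far slower than it used to.
EXEC dbo.GetOrdersByCustomer @CustomerID = 42;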

Tracking and diagnosing query performance regressions with the Query Store:

First of all, we should enable the Query Store option using the following statement:

ALTER DATABASE [SQLSHACK_Demo] SET QUERY_STORE = ON;
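The defaults are fine to start with, but the capture and retention behaviour can be tuned in the same statement. A sketch with example values only (not recommendations):

ALTER DATABASE [SQLSHACK_Demo]
SET QUERY_STORE
(
    OPERATION_MODE = READ_WRITE,              -- collect new data, not read-only
    MAX_STORAGE_SIZE_MB = 512,                -- cap the space used inside the database
    CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30),
    INTERVAL_LENGTH_MINUTES = 60,             -- size of the runtime-stats aggregation intervals
    QUERY_CAPTURE_MODE = AUTO                 -- skip trivial, rarely executed queries
);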

Then, whenever you encounter query performance problems with your application, you can simply open your database in SQL Server Management Studio, expand the Query Store folder, and open the Regressed Queries report.


You will see the report with its default configuration, which shows total duration, but this is not ideal. What we want is to check for regressions in CPU time, because that eliminates cases of blocking, and then to pick the appropriate intervals that we want to investigate:


You can change the configuration of the data viewed by clicking the Configure button at the top right of the report to get the following page:


Now we can see how the query behaved over time, and we can see that it has two plans associated with it.

If we hover over the bars of the chart in the top left corner, we can see the query's behavior in the recent and historic intervals and how they differ.


We can also see how many plans the query has in the top right pane if we click on the query's bar, and if we click on any of the plans shown, we can see its graphical display at the bottom of the window.

Fortunately, you can select both plans and click the Compare Plans button to compare them.


In our case, we can conclude that we definitely have a case of bad parameter sniffing. We have two plans: one that is appropriate for all executions, and one that was generated for a NULL parameter value and is not suitable for the majority of executions of this query. We know we can fix this properly later.

But what if we need to fix this now? Query Store offers the easiest way to do that: choose the fastest plan and click the Force Plan button.

From that point onwards, the query will be executed with the forced plan no matter what parameter values it is compiled with, so the application performs well again.
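The same action can also be scripted, which is handy when you cannot use the report. A sketch using hypothetical IDs taken from the Regressed Queries report:

-- Force the known-good plan for the regressed query
-- (42 and 7 are placeholder values for query_id and plan_id).
EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 7;

-- Later, once the root cause is fixed, the forcing can be removed.
EXEC sys.sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;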

Now we want to do further analysis to identify why this happened and how to prevent it in the long term without resorting to plan forcing. This is the kind of investigation we do not really want to do on a production server. Here too Query Store makes it easy: simply back up the production database, restore it to our test environment, and look at the Regressed Queries report again there.
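A minimal sketch of that hand-off (the backup path, logical file names, and target database name are hypothetical):

-- On the production server
BACKUP DATABASE [SQLSHACK_Demo]
TO DISK = N'C:\Backups\SQLSHACK_Demo.bak'
WITH INIT;

-- On the test server: the restored copy carries the Query Store data with it.
RESTORE DATABASE [SQLSHACK_Demo_Test]
FROM DISK = N'C:\Backups\SQLSHACK_Demo.bak'
WITH MOVE 'SQLSHACK_Demo'     TO N'C:\Data\SQLSHACK_Demo_Test.mdf',
     MOVE 'SQLSHACK_Demo_log' TO N'C:\Data\SQLSHACK_Demo_Test_log.ldf',
     RECOVERY;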

Summary:

Any DBA can face a query performance regression on any given day, so you need to know what caused it and how to track that regression over time. This article set out to show how much easier Query Store makes that. I hope it has been informative for you.


Improve SQL Server Performance with Compression

SQL Server data compression is now available in all editions of SQL Server, starting with 2016 SP1.

In this article, you will not only learn how data compression saves space, you'll also find out how compression can sometimes improve performance as well.

Space Savings vs. Performance

When I first heard about compression back in 2008, my first thought was that it would carry a big performance penalty. Compression would save disk space, but it would probably decrease performance as the data was compressed and decompressed. It turns out that compression can improve performance instead. Because compressed data fits in a smaller number of data pages, there are decreased I/O requirements. Since I/O is generally the bottleneck in SQL Server, this can improve performance. Compressed data also has a decreased memory requirement: when querying compressed data, a smaller number of pages will be copied to the buffer pool. The one area that is negatively impacted is CPU; you do need to have some CPU headroom, because compression will require some additional CPU resources.


The good thing is that if the workload is reasonably tuned, many SQL Server instances have more CPU resources than they need. One note of caution here: don't look to compression as the solution to major performance issues. You need to look at physical resources, configuration, indexing, and query tuning. The point I'm trying to make is that decreased I/O and better memory utilization will benefit the workload in many cases.

Availability

Compression has been available in SQL Server in every version from 2008 to 2016, but only in Enterprise Edition. Beginning with SQL Server 2016 SP1, it's now available in Enterprise, Standard, and Express editions. I'm excited about this because now compression is within the reach of any organization.

Compression Types

There are two types of compression that you can use to compress a table, an index, or even a partition (a sketch of applying each follows the list below):

  • Row Level:

  • Row level compression works by storing fixed-width data types as variable length data types. Nulls and zeroes do not take any space. For example, in an uncompressed table, an integer column takes 4 bytes per row, even for those rows that have a small number such as 1 or 10, or even null. Once row compression is implemented on the table, each value will be stored with the smallest possible number of bytes. So outside of the metadata, storing a 1 in an integer column will take 1 byte, storing a null or 0 will take 0 bytes.


  • Starting with SQL Server 2012, Unicode compression is implemented when you use row compression. This applies to Unicode data in nchar and nvarchar columns. In an uncompressed table, each character in a Unicode column takes up 2 bytes, even if a small character set is used. In those cases, compressing the table will store those characters in 1 byte instead.


  • In the technical article, Data Compression: Strategy, Capacity Planning, and Best Practices, Microsoft recommends using row compression on all data, as long as you have 10% extra CPU capacity, and, of course, as long as you achieve some space savings.
  • It incurs only a low CPU penalty.
  • Page Level:

  • When you implement page level compression, the rows are automatically row-compressed first.
  • Page level compression also removes repeated data within a page by two mechanisms, prefix and dictionary compression.
  • Page level compression can compress a table to a smaller size than row compression, but it is recommended for tables that are mostly inserted, but not updated that often.
  • It does have a higher CPU penalty.
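Applying either type is a single rebuild. A minimal sketch (the table and index names here are hypothetical):

-- Row compression on a whole table (its clustered index or heap)
ALTER TABLE dbo.SalesOrderDetail
REBUILD WITH (DATA_COMPRESSION = ROW);

-- Page compression on a single non-clustered index
ALTER INDEX IX_SalesOrderDetail_ProductID
ON dbo.SalesOrderDetail
REBUILD WITH (DATA_COMPRESSION = PAGE);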

Neither of these compression types will work on row-overflow data. This is data from a row that exceeds 8K. For example, you can create a table that has two varchar(8000) columns. If a row exceeds 8060 bytes, then SQL Server will move one or more of those columns to another page so that the row fits. One way to get around this issue is the new COMPRESS function.

COMPRESS Function

  • You can use this function to compress individual values, which includes row-overflow data.
  • It uses the GZIP algorithm to compress the values. There is a downside to this, however: the COMPRESS function must be applied each time a value is inserted or updated, and the DECOMPRESS function must then be used to read the value. This means that there would be changes to the application or stored procedures, as sketched below.
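A sketch of that pattern (the table and column names are hypothetical); note that COMPRESS returns VARBINARY(MAX), so the column has to be stored as varbinary:

CREATE TABLE dbo.Notes
(
    NoteId   INT IDENTITY PRIMARY KEY,
    NoteText VARBINARY(MAX)    -- holds the GZIP-compressed value
);

-- Compress on the way in
INSERT INTO dbo.Notes (NoteText)
VALUES (COMPRESS(N'A long note that would otherwise spill over to row-overflow pages ...'));

-- Decompress (and cast back to the original type) on the way out
SELECT NoteId,
       CAST(DECOMPRESS(NoteText) AS NVARCHAR(MAX)) AS NoteText
FROM dbo.Notes;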

Performance Metrics

In this demo, we’ll do some performance comparisons. We’ll take a close look at I/O and memory impact. We’ll see how compression affects both reads and writes to the data.

  • Select comparison

I created two tables: one with page compression called “bigTransactionHistoryPAGE” and one with row compression called “bigTransactionHistoryROW”. I also created a table with no compression called “bigTransactionHistory” that I'll use during this demonstration as well. Finally, I populated the two compressed tables with around 31 million records.
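If you want to check what compression would buy you before rebuilding anything, SQL Server can estimate the savings. A sketch against the article's uncompressed table (NULL for the index and partition means all of them):

EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'bigTransactionHistory',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';   -- or 'ROW' / 'NONE' for comparison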

Let’s take a look at the number of pages in each table or index:

SELECT OBJECT_NAME(i.[object_id]) AS TableName,
       i.name AS IndexName, SUM(s.used_page_count) IndexPages,
       FORMAT(1 - SUM(s.used_page_count) * 1.0/CASE WHEN i.name LIKE 'IX%' THEN 131819 ELSE 143645 END, 'P') AS PercentSaved
FROM sys.dm_db_partition_stats AS s
JOIN sys.indexes AS i
    ON s.[object_id] = i.[object_id] AND s.index_id = i.index_id
WHERE OBJECT_NAME(i.[object_id]) LIKE 'bigTransactionHistory%'
      AND OBJECT_NAME(i.[object_id]) <> 'bigTransactionHistoryTEST'
GROUP BY i.[object_id], i.name
ORDER BY IndexPages DESC;


The uncompressed table is the largest. The non-clustered index compressed with row compression saved about 30% of the space, and the clustered index saved about 37%. The page-compressed table is really interesting: the clustered index is much smaller than the original table, while the non-clustered index is about 39% smaller.

I’m going to turn on STATISTICS IO to compare the number of pages touched when I query each table:

SET STATISTICS IO ON;
GO
SELECT SUM(Quantity) AS ItemsPurchased
FROM bigTransactionHistory;

SELECT SUM(Quantity) AS ItemsPurchased
FROM bigTransactionHistoryROW;

SELECT SUM(Quantity) AS ItemsPurchased
FROM bigTransactionHistoryPAGE;


You can see here that less I/O is needed for the compressed tables. That's to be expected, since the clustered and non-clustered indexes are smaller when compressed.

The next query shows how many pages are in the buffer for each index:

SELECT COUNT(*) AS cached_pages_count,
       name, index_id
FROM sys.dm_os_buffer_descriptors AS bd
INNER JOIN
(
    SELECT object_name(object_id) AS name,
           index_id, allocation_unit_id
    FROM sys.allocation_units AS au
    INNER JOIN sys.partitions AS p
        ON au.container_id = p.hobt_id
        AND (au.type = 1 OR au.type = 3)
    UNION ALL
    SELECT object_name(object_id) AS name,
           index_id, allocation_unit_id
    FROM sys.allocation_units AS au
    INNER JOIN sys.partitions AS p
        ON au.container_id = p.partition_id
        AND au.type = 2
) AS obj
    ON bd.allocation_unit_id = obj.allocation_unit_id
WHERE database_id = DB_ID() AND name LIKE 'bigTransactionHistory%'
GROUP BY name, index_id
ORDER BY cached_pages_count DESC;


Again, when the index resides on fewer pages, the data takes less space and memory. Notice that the clustered index was used for the page-compressed table. We saw that the page-compressed clustered index is much smaller than the non-clustered index.

On this Azure VM, I have seen inconsistent results in the time to run the queries. For example, if I use DBCC DROPCLEANBUFFERS, or even restart SQL Server to ensure the data must be loaded from disk, one of these queries could run anywhere from 1 second to even a minute. I suspect that the VM is sharing resources with other VMs, and that is causing my discrepancies. I decided to run the queries each in a loop to better see the difference. Inside the loop, I use DBCC DROPCLEANBUFFERS to make sure that the data was loaded from disk, and not cached data.

DECLARE @Count INT = 0;
WHILE @Count < 100
BEGIN
    DBCC DROPCLEANBUFFERS;
    SELECT SUM(Quantity) AS ItemsPurchased
    FROM bigTransactionHistory;
    SET @Count += 1;
END;

The loop against the uncompressed table took 3 minutes and 31 seconds. The row-compressed table took 3 minutes and 4 seconds and the page compressed table took 3 minutes and 18 seconds. So in this case, the row-compressed query performed the best, with the page-compressed query next.

  • Insert comparison

Let’s take a look at inserting data. In this script, I populated an uncompressed table and our two compressed tables with a loop of 1000 inserts of 1000 rows each. To make things fair, I ran DROPCLEANBUFFERS before each loop.

DBCC DROPCLEANBUFFERS;
GO
--2:40
DECLARE @Count INT = 0;
WHILE @Count < 1000
BEGIN
    INSERT INTO dbo.bigTransactionHistoryTEST
    SELECT *
    FROM bigTransactionHistory
    WHERE TransactionID BETWEEN @Count * 1000 + 1 AND (@Count + 1) * 1000;
    SET @Count = @Count + 1;
END;

The uncompressed inserts took 2 minutes and 40 seconds. The row-compressed inserts completed a bit faster at 2 minutes and 23 seconds, and the page-compressed inserts completed in 2 minutes and 14 seconds. The page-compressed inserts were actually fastest, and I suspect that it might be due to less I/O.

  • Update comparison

Let’s take a look at updates.

DBCC DROPCLEANBUFFERS;
GO
--1:48
DECLARE @Count INT = 0;
WHILE @Count < 1000
BEGIN
    UPDATE dbo.bigTransactionHistoryTEST
    SET Quantity = Quantity + 1
    WHERE TransactionID BETWEEN @Count * 1000 + 1 AND (@Count + 1) * 1000;
    SET @Count = @Count + 1;
END;

Again, I ran loops; in this case, 1000 updates of 1000 rows each. Updating the uncompressed table took 1 minute and 48 seconds. Updating the row-compressed table took 1 minute and 36 seconds. The page-compressed table was much slower at 2 minutes and 21 seconds. Earlier, I mentioned that Microsoft recommends page compression for workloads that have few updates, and you can see that updates are indeed slower with page compression.

To summarize, I saw decent space savings, especially with page compression. The compressed tables performed better for both selects and inserts. Updates, however, performed noticeably worse with page compression.


Summary:

This article is an overview of data compression. Compression is now available in all editions of SQL Server, starting with 2016 SP1. You can implement row and page-level compression, and also use the new COMPRESS function. Compression can improve performance because of decreased I/O and memory pressure. I hope this article has been informative for you.


Recover Data Loss by Fixing a Corrupted DB

Introduction

While working in a production environment, we as database administrators are exposed to unforeseen problems with our SQL Server. These may lead to crucial data loss if we don't have an up-to-date backup solution. Such situations are nothing less than a nightmare for DBAs, because recovering the complete data becomes their utmost responsibility.

Problem

I will show two corruption issues here: one in a normal DB, and another in a DB with a MEMORY_OPTIMIZED_DATA filegroup.

Suspect mode is one of the common issues that can lead to data loss. In my case I don't have a backup for this DB, yet I am committed to bringing the DB back online without any data loss. After more research and investigation, I tested two solutions.

 

Before going into the two solutions in depth and how we can repair the DB, I will simulate the issue by creating a new DB and deleting some content from its log file using the freeware hex editor XVI32, which you can download from here: http://www.chmaas.handshake.de/delphi/freeware/xvi32/xvi32.htm

Preparing the corrupted DBs

  • First issue (normal DB): I'm going to create a simple database to use, called DemoSuspect_corrupted_DB, with a table and some random data.

USE [master];
GO
CREATE DATABASE [DemoSuspect_corrupted_DB];
GO
USE [DemoSuspect_corrupted_DB];
GO
CREATE TABLE [Employees] (
    [FirstName]   VARCHAR (20),
    [LastName]    VARCHAR (20),
    [YearlyBonus] INT);
GO
INSERT INTO [Employees] VALUES ('Paul', 'Randal', 10000);
INSERT INTO [Employees] VALUES ('Kimberly', 'Tripp', 10000);
GO

-- Simulate an in-flight transaction
BEGIN TRAN;
UPDATE [Employees]
SET [YearlyBonus] = 0
WHERE [LastName] = 'Tripp';
GO

-- Force the update to disk
CHECKPOINT;
GO

 

Now stop the SQL Server service using the T-SQL below, or manually:

SHUTDOWN WITH NOWAIT;

GO

 

  • Second issue: I will use one of Microsoft's sample DBs for SQL Server 2016 (WideWorldImporters-Full.bak); you can download the DB backup file from HERE. I will restore the DB on my SQL Server instance and create a table with some random data.

USE [WideWorldImporters];
GO
CREATE TABLE [Employees] (
    [FirstName]   VARCHAR (20),
    [LastName]    VARCHAR (20),
    [YearlyBonus] INT);
GO
INSERT INTO [Employees] VALUES ('Paul', 'Randal', 10000);
INSERT INTO [Employees] VALUES ('Kimberly', 'Tripp', 10000);
GO

-- Simulate an in-flight transaction
BEGIN TRAN;
UPDATE [Employees]
SET [YearlyBonus] = 0
WHERE [LastName] = 'Tripp';
GO

-- Force the update to disk
CHECKPOINT;
GO

Now stop the SQL Server service using the T-SQL below, or manually:

SHUTDOWN WITH NOWAIT;

GO

 

Now that the service is offline, let us delete some content from the log files to corrupt the two DBs. I will do it on one DB, and you can repeat the same scenario on the second DB:

  • Download the freeware hex editor XVI32 from HERE
  • After installation, take a copy of your DB log file in another location
  • Click File >> New >> Insert >> and select the copied log file (the .ldf file)
  • Delete some of its content and save the changes over the copied log file
  • Replace the original log file with this damaged copy
  • Now the log file of the DB is damaged
  • Restart the SQL instance to bring it back online
  • You will find the DB in SUSPECT mode (the query below confirms it)
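A quick way to confirm the state after the restart (a sketch against the demo database created above):

SELECT name, state_desc
FROM sys.databases
WHERE name = 'DemoSuspect_corrupted_DB';
-- Expected here: SUSPECT (or RECOVERY_PENDING, depending on how the log was damaged)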


SQL Server Torn Page Repair: Without Worrying About Data Loss or Corruption

There can often be bugs and errors in fetching data from the server, which can leave system administrators baffled as all work comes to a standstill. Incomplete or incorrect transactions cause confusion among employees working on different sections of a database.

 

Repair Corrupt SQL Database with Advanced Recovery Option

Corruption of your SQL Server database can lead to a situation where its contents are destroyed if the issue is left unattended while recovery is still possible. To repair a corrupt SQL database, you must always ensure that you plan your disaster recovery solution.

 

 

Methods of protection and recommendations against the threats and dangers of a ransomware virus

I would like to inform you that there are warnings about risks to our servers and computers, based on statements published today on more than one website and through sources such as Microsoft and the NCSC, concerning the emergence of a ransomware virus.

This virus has attacked many PCs and servers all over the world, which is why we need to do the following things ASAP.

 



 

 

Hybrid Cloud and Hekaton Features in SQL Server 2014

Introduction

Microsoft SQL Server 2014 is considered to be the first version that supports Hybrid Cloud by adding a lot of exciting new features.

In this article, I will cover some of the top new features, focusing on Hekaton and the Hybrid Cloud enhancements:

Hekaton

Hekaton is the code name of the new In-Memory OLTP feature. It is a new database engine component, fully integrated with SQL Server and designed for memory-resident data and OLTP workloads. In simple words, with Hekaton we can store an entire table in memory.

Let’s list some of the benefits of this new feature:

  • Memory-Optimized Tables can be accessed using T-SQL, like Disk-Based Tables.
  • Both Memory-Optimized Tables and Disk-Based Tables can be referenced in the same query, and we can also update both types of tables in one transaction.
  • Stored procedures that only reference Memory-Optimized Tables can be natively compiled into machine code, which improves performance.
  • The new engine is designed for a high level of session concurrency for OLTP transactions.

There are still some limitations of Memory-Optimized Tables in SQL Server 2014:

  • The ALTER TABLE statement, the sp_rename stored procedure, altering the BUCKET_COUNT, and adding or removing an index outside of the CREATE TABLE statement are all unsupported for In-Memory tables
  • Some constraints are not supported (CHECK, FOREIGN KEY, UNIQUE)
  • RANGE INDEXES and TRIGGERS are not supported for In-Memory tables
  • REPLICATION, MIRRORING, and LINKED SERVERS are incompatible with Memory-Optimized Tables.

To know more information, you can check SQL Server Support for In-Memory OLTP.

Memory-Optimized-Tables are appropriate for the following scenarios:

  • A table has a high insertion rate of data from multiple concurrent sources
  • A table cannot meet scale-up requirements for high read performance, especially with periodic batch inserts and updates
  • Intensive logic processing inside a stored procedure
  • A database solution cannot achieve low-latency business transactions

Let’s now go through the steps to create a Memory-Optimized-Table
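A minimal sketch of those steps (the database name, filegroup name, container path, and table definition are all hypothetical):

-- 1. Add a MEMORY_OPTIMIZED_DATA filegroup and a container to the database
ALTER DATABASE InMemoryDemo
ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE InMemoryDemo
ADD FILE (NAME = 'imoltp_container', FILENAME = 'C:\Data\imoltp_container')
TO FILEGROUP imoltp_fg;
GO

-- 2. Create the Memory-Optimized Table; SQL Server 2014 requires at least one
--    index declared inside CREATE TABLE (here a hash index on the primary key).
USE InMemoryDemo;
GO
CREATE TABLE dbo.ShoppingCart
(
    CartId    INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId    INT NOT NULL,
    CreatedAt DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);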

 

For more information, check the source article from HERE.

 