
Category Archives: General topics

General topics in SQL Server

Query Store for Solving Query Performance Regressions

Query performance is a very important area of SQL Server. There are always badly performing queries around.

Query Store is the newest tool for tracking and resolving performance problems in SQL Server.

In this article, we are going to have a look at some practical uses of the SQL Server Query Store.

What is Query Store?

The Query Store has been described by Microsoft as a ‘flight data recorder’ for SQL Server queries. It tracks the queries run against the database, recording the details of the queries, their plans, and their runtime characteristics. Query Store is a per-database feature and runs automatically: once turned on, nothing further needs to be done to get it to track data. It runs in the background, collecting data and storing it for later analysis.

Query Store is available in SQL Server 2016 and later, and in Azure SQL Database v12 and later. It is available in all editions of SQL Server, even Express edition.

How is Query Store different from other tracking options?

We have had query performance tracking options for some time, in the form of dynamic management views (mostly sys.dm_exec_query_stats and sys.dm_exec_query_plan) and tracing tools such as SQL Server Profiler and Extended Events.
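
For example, a quick look at the most CPU-hungry statements currently in the plan cache might use a query like the following; this is a minimal sketch, and note that the DMVs only cover plans still in cache:

-- Top 10 cached statements by average CPU time (microseconds)
SELECT TOP (10)
    qs.execution_count,
    qs.total_worker_time / qs.execution_count AS avg_cpu_us,
    st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu_us DESC;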

So, what makes Query Store different? Let me start answering that by describing a scenario that I encountered a couple of years ago.

A particular critical system was suddenly performing badly. It had been fine the previous week, and no Extended Events sessions or Profiler traces had been running historically. The admin had restarted the server when the performance problem started, just to make sure it was not something related to a pending reboot.

As such, there was no historical performance data at all, and working out what had happened, and why query performance was different this week, was extremely difficult.


Query Store would have solved the problem of the lack of historical data. Once turned on, Query Store tracks query performance automatically. The data collected is persisted into the user database and hence, unlike with the DMVs, it is not lost on a server restart.

Since the Query Store data is persisted into the user database, it is included in the backups of that database as well. This makes it much easier to do performance analysis somewhere other than on the production server.


What exactly is a query performance regression?

A dictionary does not help much here. The Oxford dictionary defines regression as ‘returning to an earlier state’, which is definitely not relevant here.

A performance regression occurs when a query’s performance degrades over time. This degradation may be sudden or gradual, though it is probably more common for it to be sudden. The degradation may be permanent, or performance may, at a later point, return to the previously accepted behavior.

What causes a regression?

A common cause of a query performance regression is a plan change. Let’s briefly talk about the query optimization and plan caching process to see why.

T-SQL is a declarative language: the query expresses the desired results, not the process of getting to those results. It is up to the SQL Server engine to figure out how to produce those results, and the portion of the engine that does so is the query optimizer, which takes the query and outputs a query plan: a description of the operations that the query processor can then execute to obtain the desired results.

For anything other than a trivial query, there are multiple different plan shapes that can produce the same results but differ in their internal details and in how long they take to execute. In theory, the optimizer will always take a query and produce a good plan, one that, if not the fastest possible plan, is fast enough. However, that is not always the case, and it is also perfectly possible for a query to have a plan that is fast for some parameter values and really slow for other parameter values.

Then there is plan caching, which adds another layer of complexity. Optimization is an expensive process, so SQL Server caches execution plans. When the query executes again, the engine can fetch the plan from the cache and execute it without paying the cost of the optimization process.
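
As a rough illustration of plan reuse, the plan cache can be inspected through the DMVs; a minimal sketch:

-- How often each cached plan has been reused since it was compiled
SELECT cp.usecounts, cp.objtype, st.text AS query_text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;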


There are many minor causes of plan changes, such as:

  • Bad parameter sniffing.
  • Out of date statistics.
  • Bad query patterns.
  • Overly-complicated queries.

These all tend to cause temporary performance regressions.

Sometimes the regression is caused by data growth, in which case it will be persistent.

Code changes or schema changes can also cause a performance regression.

Tracking and diagnosing query performance regressions with the Query Store:

First of all, we should enable the Query Store option using the following statement:

ALTER DATABASE [SQLSHACK_Demo] SET QUERY_STORE = ON;
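
The defaults are sensible, but the main Query Store options can also be set explicitly; a sketch with illustrative values:

ALTER DATABASE [SQLSHACK_Demo]
SET QUERY_STORE (
    OPERATION_MODE = READ_WRITE,
    DATA_FLUSH_INTERVAL_SECONDS = 900,
    INTERVAL_LENGTH_MINUTES = 60,
    MAX_STORAGE_SIZE_MB = 1024,
    QUERY_CAPTURE_MODE = AUTO
);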

Then, whenever you encounter any query performance problems with your application, you can simply open your database in SQL Server Management Studio, expand the Query Store folder, and open the Regressed Queries report.


You will see the report with its default configuration, which shows totals for duration, but this is not ideal. What we want is to check for regressions in CPU time, because that eliminates cases of blocking, and then to pick the appropriate intervals to investigate:


You can change the configuration of the data viewed by clicking the Configure button at the top right of the report, which opens the following page:


Now we can see how the query behaved over time, and we can see that this query has two plans associated with it.

If we hover over the bars of the graph in the top left pane, we can see the query’s behavior in the recent and historic intervals and how they differ.


We can also see how many plans the query has in the top right pane if we click on the query’s bar, and if we click on any of the plans shown, we can see its graphical display at the bottom of the window.

Fortunately, you can select both plans and click the Compare Plans button to compare them.


In our case, we can conclude that we definitely have a case of bad parameter sniffing. We have two plans: one that is appropriate for all executions, and one that was generated for a NULL parameter value and is not suitable for the majority of executions of this query. We know we can fix this properly later.

But what if we need to fix it now? Here the Query Store offers the easiest way to do so: choose the fastest plan and click the Force Plan button.

From that point onwards, the query will be executed with the forced plan, no matter what parameter values it is compiled with, and the application performs well again.
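
The same forcing can be done in T-SQL through the Query Store stored procedures; a sketch, assuming the query and plan IDs shown in the report are 42 and 2:

-- Force plan 2 for query 42 (IDs come from the Regressed Queries report)
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 2;

-- To undo the forcing later:
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 2;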

Now we want to do further analysis to identify why this happened and how to prevent it in the long term without resorting to plan forcing. This is the kind of investigation we do not really want to do on a production server. Here, too, Query Store offers the easiest way forward: simply back up the production database, restore it to a test environment, and look at the Regressed Queries report of the Query Store there.
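
On the restored copy, the data behind the report can also be queried directly through the Query Store catalog views; a minimal sketch:

-- Runtime stats per plan for each query, from the Query Store views
SELECT q.query_id, p.plan_id,
       qt.query_sql_text,
       rs.avg_cpu_time, rs.count_executions,
       rsi.start_time, rsi.end_time
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
JOIN sys.query_store_runtime_stats_interval AS rsi
     ON rsi.runtime_stats_interval_id = rs.runtime_stats_interval_id
ORDER BY rs.avg_cpu_time DESC;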

Summary:

Any DBA can face a query performance regression on any given day, so you need to know what causes regressions and how to track them over time. This article has shown how much easier the Query Store makes that. I hope this article has been informative for you.


Methods of protection and recommendations against the threats and dangers of ransomware

I would like to inform you that there are warnings about risks to our servers and computers, based on statements published today on more than one website (by Microsoft, the NCSC, etc.) regarding the emergence of a ransomware virus.

This virus has attacked many PCs and servers all over the world, which is why we urgently need to do the things below as soon as possible.

 


Read the rest of this entry »

 

Posted on May 14, 2017 in General topics

 

Hybrid Cloud and Hekaton Features in SQL Server 2014

Introduction

Microsoft SQL Server 2014 is considered to be the first version that supports Hybrid Cloud, adding a lot of exciting new features.

In this article, I will cover some of the top new features, including Hekaton and the Hybrid Cloud enhancements:

Hekaton

Hekaton is the code name of the new In-Memory OLTP feature. It is a new database engine, fully integrated with SQL Server and designed for memory-resident data and OLTP workloads. In simple words, with Hekaton we can store an entire table in memory.

Let’s list some of the benefits of this new feature:

  • Memory-Optimized-Tables can be accessed using T-SQL, just like Disk-Based-Tables.
  • Both Memory-Optimized-Tables and Disk-Based-Tables can be referenced in the same query, and both types of tables can be updated in one transaction.
  • Stored procedures that only reference Memory-Optimized-Tables can be natively compiled into machine code, which results in improved performance.
  • This new engine is designed for a high level of session concurrency for OLTP transactions.

There are still some limitations on Memory-Optimized-Tables in SQL Server 2014:

  • The ALTER TABLE statement, the SP_RENAME stored procedure, altering the BUCKET_COUNT, and adding or removing an index outside of the CREATE TABLE statement are not supported for In-Memory tables
  • Some constraints are not supported (CHECK, FOREIGN KEY, UNIQUE)
  • RANGE indexes and TRIGGERS are not supported for In-Memory tables
  • REPLICATION, MIRRORING, and LINKED SERVERS are incompatible with Memory-Optimized-Tables.

To know more information, you can check SQL Server Support for In-Memory OLTP.

Memory-Optimized-Tables are appropriate for the following scenarios:

  • A table with a high insertion rate of data from multiple concurrent sources
  • A table that cannot meet scale-up requirements for high-performance read operations, especially with periodic batch inserts and updates
  • Intensive business-logic processing inside a stored procedure
  • A database solution that cannot achieve low-latency business transactions

Let’s now go through the steps to create a Memory-Optimized-Table.
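
The full steps are in the source article; as a rough outline (the database name, container path, and table definition below are illustrative), a memory-optimized filegroup is added first and then the table is created with MEMORY_OPTIMIZED = ON:

-- 1. Add a memory-optimized filegroup and container to the database
ALTER DATABASE Demo ADD FILEGROUP Demo_MOD CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE Demo ADD FILE (NAME = 'Demo_MOD_Container',
    FILENAME = 'C:\Data\Demo_MOD') TO FILEGROUP Demo_MOD;

-- 2. Create the memory-optimized table with a hash index
CREATE TABLE dbo.ShoppingCart
(
    CartId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId INT NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);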

 

For more information, check the source article HERE

 

Posted on March 1, 2017 in General topics

 

How to analyze Storage Subsystem Performance in SQL Server

Introduction

When trying to improve performance, it is common for DBAs to investigate every aspect except storage subsystem performance, even though, in many cases, issues are in fact caused by a poorly performing storage subsystem. Therefore, I want to give you some tools and recommendations that you can use to prevent your storage subsystem from becoming a performance issue for you.

In this article, I will cover how to measure and analyze your storage subsystem performance and how to test your storage subsystem, including:

  1. Main metrics for storage performance
  2. Operating System Tools to measure storage performance
  3. SQL Server Tools to measure storage performance
  4. Using SQL Server to test storage performance

Main metrics for storage performance:

In this section, I will introduce the three main metrics behind most storage performance issues, as follows; a query sketch for reading them from SQL Server appears after the list:

  1. Latency
    Each IO request takes some time to complete. This latency is measured in milliseconds (ms) and should be as low as possible.
  2. IOPS
    IOPS means IO operations per second: the number of read or write operations that can be completed in one second. A certain number of IO operations at a given block size also yields a certain throughput in megabytes per second, so these two are related.
  3. Throughput
    The most common value quoted by a disk manufacturer is how much throughput a certain disk can deliver. This number is usually expressed in megabytes per second (MB/s), and it is tempting to believe that this is the most important factor.
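
From inside SQL Server, latency per database file can be derived from the virtual file stats DMV; a minimal sketch:

-- Average read/write latency (ms) per database file since the last restart
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.num_of_reads,
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY avg_read_ms DESC;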

For more information, please check the source article HERE

 

Posted on March 1, 2017 in General topics

 

SQL Server performance – measure Disk Response Time

Introduction

As DBAs, we all get to the point where we are asked to set up a new server for a particular environment. Setting up a new server is not a big thing, but answering the question of how well it will perform might be tricky.

There are tons of things we can measure to judge how well a newly installed server will respond, but here I will discuss one of the most valuable resources of the server: the disk. Most often, the disk is not measured correctly, and I have seen environments where the disk response time has never been measured at all. I will discuss here a tool from Microsoft which is very handy and can solve your problem very quickly: diskspd.exe!

It is the successor to SQLIO, which was previously used to measure IO response time for the disk. The source code of diskspd.exe is hosted on GitHub, and you can download this free utility from Microsoft’s website using this link.

After you download the utility, you will get a zip file. Just unzip the file, and it will give you the folders and files shown in the screenshot below. You will need the diskspd executable inside the folder “amd64fre” if you have a 64-bit version of SQL Server (most of us will).
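
To give an idea of usage, a typical invocation might look like the following; the parameter values are purely illustrative, so check the documentation included in the download:

diskspd.exe -c2G -d60 -r -w25 -t4 -o8 -b8K -h -L E:\IO\testfile.dat

This runs a 60-second random IO test against a 2 GB test file with 25% writes, 4 threads, 8 outstanding IOs per thread, an 8 KB block size, caching disabled, and latency statistics collected.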

To read the rest of this article, please check it here

 


 

Posted on February 16, 2017 in General topics, SQL Server 2014

 


Overview On SQL Server Database Integrity Check

Problem

Users may have numerous databases on their SQL Server instance. While working on a database, they may face database integrity issues, as corruption can affect several parts of a database, i.e. a table, the whole database, etc.

Solution

In general, a SQL Server database integrity task includes checking the allocation and the structural integrity of all the objects in the stated database. It involves various database and index checks, as discussed below.

Determine Requirements

Firstly, check all the databases periodically at a high level for SQL Server database consistency errors. If a database is large, check it table by table. Moreover, it is important to run all these checks on a set schedule, as there is always a chance of system failures, disk drive failures, etc.

If the database is small, it is easy to run all the checks across the whole database. In SQL Server 2000 and 2005, maintenance plans are the simplest built-in way of performing DBCC CHECKDB, making it easy to check the complete database. As discussed above, this should be part of the normal database maintenance process. If you are not running SQL Server database integrity checks, or are not sure whether you are, take the time to implement these procedures.

Useful Commands

There are mainly four different commands which can be used for checking different levels of consistency in the database:

  • DBCC CHECKALLOC – Checks the consistency of the disk space allocation structures of the specified database.
  • DBCC CHECKCATALOG – Checks the consistency of the system catalog in the specified database.
  • DBCC CHECKDB – Checks the consistency of the entire specified database; it effectively includes the checks performed by the other commands.
  • DBCC CHECKTABLE – Checks the integrity of all pages and structures that make up a table or indexed view.

The DBCC CHECKDB command is the most widely used, as it checks the complete database, whereas the other options are useful for quick spot-checking. All these commands can be used to check the consistency of a particular portion of data and to repair any corruption that is found.
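
For example, a routine full check might look like this (the database name is illustrative):

-- Full consistency check, suppressing informational messages
DBCC CHECKDB ('SQLSHACK_Demo') WITH NO_INFOMSGS, ALL_ERRORMSGS;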

Running

Ad hoc: at the most basic level, all of these commands can be executed interactively against a database. It is always suggested to run them during off hours, as they can cause performance problems while executing.

Maintenance plans: as discussed above, both SQL Server 2000 and 2005 can use maintenance plans to run the DBCC CHECKDB command, though only with the basic options.

Custom jobs: you can create your own SQL Server Agent jobs to run the desired commands against the database.

Scheduling

The most beneficial time to run these commands is during low-usage periods, and SQL Server Agent is the best way to schedule the jobs. If the hardware is stable, a regular schedule raises no issues; if you are facing recurring hardware problems, it is best to run all these checks more frequently.

Output

There are many installations where maintenance plans are set up and the SQL Server database integrity check is run, but no one checks the output, so corruption issues go unnoticed. You need to check the output from the commands to make sure there are no issues. This can be done with the reporting option of maintenance plans and by checking the SQL Server error logs: every time a DBCC command is run, an entry is made in the SQL Server error log.

Handling Issues

If there is corruption in the database, you will want to take the correct action to eliminate it. This is done by using the repair options of DBCC CHECKALLOC, DBCC CHECKDB, and DBCC CHECKTABLE.
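
As a sketch of the least drastic repair (REPAIR_ALLOW_DATA_LOSS also exists but, as the name warns, can discard data, so restoring from a clean backup is usually preferable), repair requires single-user mode; the database name is illustrative:

ALTER DATABASE SQLSHACK_Demo SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB ('SQLSHACK_Demo', REPAIR_REBUILD); -- only repairs that cause no data loss
ALTER DATABASE SQLSHACK_Demo SET MULTI_USER;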

Conclusion

Above, we have looked at the issues users face while checking SQL Server data integrity and discussed an approach to SQL Server database integrity checking that makes it easy for users to resolve them.

 

Posted on January 9, 2017 in General topics

 


SQL Injection


SQL Injection (SQLi) refers to an injection attack wherein an attacker can execute malicious SQL statements (also commonly referred to as a malicious payload) that control a web application’s database server. Such attacks can affect any website or web application that uses a SQL database.

An attacker can bypass a web application’s authentication and authorization mechanisms and retrieve the contents of an entire database. SQL injection can also be used to add, modify, and delete records in a database.

In a 2012 study, it was observed that the average web application received 4 attack campaigns per month, and retailers received twice as many attacks as other industries.

A SQL injection attack needs two conditions to exist:

  • A relational database that uses SQL
  • A user controllable input which is directly used in an SQL query.

Subclasses of SQLi:

  1. Classic SQLi
  2. Blind or Inference SQLi
  3. Database management system-specific SQLi
  4. Compounded SQLi

Example:

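The original screenshot is not available; the following is a representative sketch of the kind of vulnerable code the example describes, building the query by concatenating raw input (the table, column, and variable names are illustrative):

-- Vulnerable dynamic SQL: user input is concatenated, not parameterized
DECLARE @user NVARCHAR(50) = N'alice', @pass NVARCHAR(50) = N'secret';
DECLARE @sql NVARCHAR(400) =
    N'SELECT * FROM Users WHERE UserName = ''' + @user +
    N''' AND Password = ''' + @pass + N'''';
EXEC (@sql);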

Here the user needs to provide a user name and password. If the attacker provides a payload built around ‘or 0=0’ as the username and password, the query will look like this:

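The original screenshot is likewise unavailable; with a payload such as ' or 0=0 -- in the username field, the concatenated query would look roughly like this (the -- comments out the password check):

SELECT * FROM Users WHERE UserName = '' or 0=0 --' AND Password = '';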

Since the condition injected by the attacker is true in all circumstances, the query will return all records in the database.

In this way, an attacker is able to view sensitive information.

How to prevent SQLi:

  • Adopt an input validation technique where user input is checked against a set of rules.
  • Users should have the least privileges necessary on the database (see the sketch after this list).
  • Don’t use ‘sa’ accounts for web applications.
  • Use application-specific database user accounts.
  • Remove all stored procedures that are not in use.
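
As a small sketch of the least-privilege and application-specific-account points above (the names and password are illustrative):

-- Dedicated, low-privileged account for the web application
CREATE LOGIN WebAppLogin WITH PASSWORD = 'Str0ng!Passw0rd#2024';
CREATE USER WebAppUser FOR LOGIN WebAppLogin;
GRANT SELECT, INSERT ON dbo.Orders TO WebAppUser; -- only what the application needs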
