Corruption of a SQL Server database can destroy its contents if the issue is left unattended while recovery is still possible. To repair a corrupt SQL database, always make sure you have planned your disaster recovery solution in advance.
I would like to inform you that, based on statements published today on more than one website (including Microsoft, the NCSC, etc.), there are warnings about risks to our servers and computers from the emergence of a ransomware virus.
This virus has attacked many PCs and servers all over the world, which is why we need to do the following things as soon as possible.
Microsoft SQL Server 2014 is considered the first version to support the Hybrid Cloud, and it adds a lot of exciting new features.
In this article, I will cover some of the top new features, including Hekaton and the Hybrid Cloud enhancements:
Hekaton is the code name of the new In-Memory OLTP feature. It is a new database engine, fully integrated with SQL Server and designed for memory-resident data and OLTP workloads. In simple words, with Hekaton we can store an entire table in memory.
Let’s list some of the benefits of this new feature:
- Memory-Optimized Tables can be accessed using T-SQL, just like disk-based tables.
- Memory-Optimized Tables and disk-based tables can be referenced in the same query, and both types of tables can be updated in a single transaction.
- Stored procedures that reference only Memory-Optimized Tables can be natively compiled into machine code, which improves performance.
- The new engine is designed for a high level of session concurrency for OLTP transactions.
There are still some limitations for Memory-Optimized Tables in SQL Server 2014:
- The ALTER TABLE statement, the SP_RENAME stored procedure, altering the BUCKET_COUNT, and adding or removing an index outside of the CREATE TABLE statement are not supported for In-Memory tables.
- Some constraints are not supported, such as CHECK, FOREIGN KEY, and UNIQUE.
- RANGE indexes and triggers are not supported for In-Memory tables.
- Replication, mirroring, and linked servers are incompatible with Memory-Optimized Tables.
For more information, you can check SQL Server Support for In-Memory OLTP.
Memory-Optimized-Tables are appropriate for the following scenarios:
- A table with a high insertion rate of data from multiple concurrent sources
- A table that cannot meet scale-up requirements for high read performance, especially with periodic batch inserts and updates
- Intensive logic processing inside a stored procedure
- A database solution that cannot achieve low-latency business transactions
Let’s now go through the steps to create a Memory-Optimized Table.
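Those steps can be sketched in T-SQL as follows; the database name InMemDemo, the file path, and the table definition are illustrative placeholders, not values from the original article:

```sql
-- Step 1: add a memory-optimized filegroup and a container to an existing database
ALTER DATABASE InMemDemo
    ADD FILEGROUP InMemFG CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE InMemDemo
    ADD FILE (NAME = N'InMemData', FILENAME = N'C:\Data\InMemData')
    TO FILEGROUP InMemFG;

-- Step 2: create the table with MEMORY_OPTIMIZED = ON;
-- memory-optimized tables require a nonclustered (here, hash) primary key
CREATE TABLE dbo.SessionState
(
    SessionId  INT             NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload    VARBINARY(8000) NULL,
    LastAccess DATETIME2       NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

With DURABILITY = SCHEMA_AND_DATA the data survives a restart; SCHEMA_ONLY keeps only the table definition and is faster for transient data such as session state.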
For more information, check the source article HERE.
To improve performance, it is common for DBAs to look at every aspect except the storage subsystem, even though, in many cases, issues are in fact caused by poor storage subsystem performance. Therefore, I want to give you some tools and recommendations that you can use to keep your storage subsystem from becoming a performance problem for you.
In this article, I will cover how to measure and analyze your storage subsystem performance and how to test your storage subsystem, including:
- Main metrics for storage performance
- Operating System Tools to measure storage performance
- SQL Server Tools to measure storage performance
- Using SQL Server to test storage performance
Main metrics for storage performance:
In this section I will introduce the three main metrics behind most storage performance issues:
- Latency: each IO request takes some time to complete. This latency is measured in milliseconds (ms) and should be as low as possible.
- IOPS: IO operations per second, i.e., the number of read or write operations that can be completed in one second. A given number of IO operations also yields a certain throughput in megabytes per second, so the two are related; for example, 10,000 IOPS at an 8 KB block size is roughly 78 MB/s.
- Throughput: the most commonly quoted value from a disk manufacturer is how much throughput a certain disk can deliver. This number is usually expressed in megabytes per second (MB/s), and it is easy to believe that this would be the most important factor.
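As a concrete example of measuring latency from inside SQL Server, the sys.dm_io_virtual_file_stats DMV exposes cumulative IO counts and stall times per database file; the query below is a sketch of that approach, deriving average read and write latency per file:

```sql
-- Per-file average latency from SQL Server's accumulated IO statistics
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads
       END AS avg_read_latency_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes
       END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id = mf.file_id
ORDER BY avg_read_latency_ms DESC;
```

Note that these numbers are cumulative since the instance started, so they show the long-run average rather than the current load.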
For more information, please check the source article HERE.
Microsoft SQL Server 2012 introduces many features that help database administrators, database developers, and BI developers.
In this article, I will cover some of the new features for database developers in these main points:
- Database Engine Improvements
- Improvements to SQL Server Management Studio Debugging
- Changes to the Scope of Objects
Database Engine Improvements:
- File Tables: When we open SQL Server Management Studio, one of the first changes we notice in SQL Server 2012 is the addition of a new type of table called a FileTable.
A FileTable creates a connection between a Windows share and a database table, such that any file that appears in the share becomes a row in the table.
It allows us to run queries that tell us how many files we have in that shared location, what types of files they are, what size they are, and so on.
Setting this up is a multiple step process:
- Enable FILESTREAM: this is the first step, and it is done at the instance level. Using the Configuration Manager tool, open the properties of the instance we are interested in; there is a tab for FILESTREAM, where we should select all of the checkboxes (enabling all features), as below.
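Once FILESTREAM is enabled at the instance level, the remaining steps can be sketched in T-SQL; the database name, filegroup, file paths, and directory names below are illustrative placeholders:

```sql
-- Allow full FILESTREAM access, including access through the Windows share
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;

-- A database with a FILESTREAM filegroup and non-transacted access enabled
CREATE DATABASE DocsDB
ON PRIMARY
    (NAME = N'DocsDB_data', FILENAME = N'C:\Data\DocsDB.mdf'),
FILEGROUP FSGroup CONTAINS FILESTREAM
    (NAME = N'DocsDB_fs', FILENAME = N'C:\Data\DocsDB_fs')
WITH FILESTREAM (NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'DocsDB');

USE DocsDB;

-- The FileTable itself: files dropped into the share appear as rows here
CREATE TABLE dbo.Documents AS FILETABLE
WITH (FILETABLE_DIRECTORY = N'Documents');
```

Any file copied into the resulting share can then be inspected with an ordinary query, e.g. SELECT name, file_type, cached_file_size FROM dbo.Documents.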
To read the complete article, check it HERE.
As DBAs, we all get to the point where we are asked to set up a new server for a particular environment. Setting up a new server is not a big thing, but answering the question "how well will it perform?" can be tricky.
There are tons of metrics we can use to measure how well a newly installed server will respond, but here I will discuss one of the most valuable resources of the server: the disk. Most often the disk is not measured correctly, and I have seen environments where disk response time has never been measured. I will discuss a tool from Microsoft which is very handy and can solve your problem very quickly: diskspd.exe!
It supersedes SQLIO, which was previously used to measure IO response time for the disk. The source code of diskspd.exe is hosted on GitHub. You can download this free utility from Microsoft's website using this link.
After you download the utility, you will get a zip file. Just unzip it, and it will give you the folders and files shown in the screenshot below. You will need the diskspd.exe inside the folder "amd64fre" if you have a 64-bit version of SQL Server (most of us will).
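As an illustration, a typical diskspd run looks like the following; the file path and parameter values here are example choices, not prescriptions from the article:

```
:: 60-second test: 8 KB blocks, random IO, 30% writes,
:: 4 threads with 8 outstanding IOs each, latency stats captured (-L)
diskspd.exe -c1G -d60 -r -w30 -t4 -o8 -b8K -L C:\Test\testfile.dat
```

The summary output reports IOPS, throughput (MB/s), and, with -L, the latency figures discussed in the storage metrics section.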
To read the complete article, please check it here.
Users may have numerous databases on their SQL Server. While working with a database, they may face integrity issues, since integrity applies to several parts of a database, i.e., tables, indexes, etc.
In general, a SQL Server database integrity task includes checking the allocation and structural integrity of every object in the stated database. It involves various database checks, as discussed below.
First, check all databases periodically at a high level for SQL Server database consistency errors. If a database is large, check it table by table. Moreover, it is important to run these checks on a set schedule, as there is always a chance of system failure, disk drive failure, etc.
If the database is small, it is easy to run all the checks across the whole database. In SQL Server 2000 and 2005, maintenance plans are the easiest way of performing DBCC CHECKDB, which makes checking the complete database simple. As discussed above, this is part of the normal database maintenance process. If you are not running SQL Server database integrity checks, or are not sure whether you are, take the time to implement these procedures.
There are four main commands that can be used to check the different levels of consistency in a database:
- DBCC CHECKALLOC – checks the consistency of the disk space allocation structures of the specified database.
- DBCC CHECKCATALOG – checks the consistency of the catalog in the specified database.
- DBCC CHECKDB – checks the logical and physical integrity of all objects in the specified database; it includes the checks performed by CHECKALLOC, CHECKTABLE, and CHECKCATALOG.
- DBCC CHECKTABLE – checks the integrity of all pages and structures that make up a table or indexed view.
The DBCC CHECKDB command is the most widely used, as it checks the complete database, whereas the other options are useful for quick spot checks. All of these commands can be used to check the consistency of a particular portion of data and to repair any corruption that is found.
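A minimal example of that workflow: run a full check first, and fall back to a repair option only if errors are reported and restoring from a clean backup has been ruled out, since REPAIR_ALLOW_DATA_LOSS can discard data. The database name is a placeholder:

```sql
-- Full consistency check; suppress informational messages, show every error
DBCC CHECKDB (N'MyDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- If corruption is reported and no clean backup is available:
-- repair options require the database to be in single-user mode
ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (N'MyDatabase', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE MyDatabase SET MULTI_USER;
```

After any repair, rerun DBCC CHECKDB to confirm the database is clean.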
Ad hoc: at the most basic level, all of these commands can be executed interactively against a database. It is always suggested to run them during off hours, as they can cause performance problems while executing.
Maintenance plans: as discussed above, both SQL Server 2000 and 2005 can use maintenance plans to run the DBCC CHECKDB command, though maintenance plans expose only its base options.
Custom jobs: you can create your own SQL Server Agent jobs to run the desired commands against the database.
The most beneficial time to run these commands is during periods of low usage, and SQL Server Agent is the best way to schedule jobs at any time. If the hardware is stable, there is no issue with running these commands on a regular schedule; if you are facing recurring hardware problems, it is best to run all these checks more frequently.
There are many installations where maintenance plans are set up and the SQL Server database integrity check is run, but no one checks the output, which is how corruption issues slip through. Users need to check the output from the commands to make sure there are no issues. This can be done with the reporting option of maintenance plans and by checking the SQL Server error logs: every time a DBCC command is run, an entry is made in the SQL Server error log.
If there is corruption in the database, users will want to take the correct action to eliminate these issues. This is done by using the repair options of DBCC CHECKALLOC, DBCC CHECKDB, and DBCC CHECKTABLE.
Quick Solution to Repair Corrupt File:
If your database files are severely corrupted, you can go for an automated solution to repair them. The SQL Repair Tool repairs corrupted MDF/NDF files of SQL Server 2017, 2016, and all earlier versions. It also helps you recover deleted database records.
Above, we have discussed the issues users face while checking SQL Server data integrity, and the ways to perform a SQL Server database integrity check that make it easy for users to resolve them.