
Recommendations on maintaining and configuring MSSQL servers

This article features the following information:

  1. Updating an MSSQL Server Instance.
  2. Maintaining indexes (reorganizing/rebuilding) and updating statistics.
  3. Fine-tuning and optimizing MSSQL Server (installation, initial setup, shrinking the transaction log).
  4. Working with Performance Counters to troubleshoot issues related to server instance performance.

1. Updating an MSSQL Server Instance

You should always keep the MSSQL Server instance up to date in order to eliminate vulnerabilities and keep the Database Engine current. To do so, follow these steps.

1. Find out what server instance edition and version you have installed:
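
For example, you can run a query like this in Management Studio (@@VERSION and SERVERPROPERTY are standard built-in functions):

-- Version, edition, and patch level of the installed instance
SELECT @@VERSION                        AS version_string,
       SERVERPROPERTY('Edition')        AS edition,
       SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('ProductLevel')   AS product_level   -- RTM, SP1, SP2, ...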

2. Go to https://technet.microsoft.com/en-us/sqlserver/ff803383.aspx and look for available updates for the installed server instance version. The most important are:

  • Latest Service Pack
  • Latest Cumulative Update


3. If such updates are available, download and install them. The installation process is standard: run the .exe file and install the updates for the selected MSSQL Server instance.

2. Maintaining indexes (reorganizing/rebuilding) and updating statistics

To maintain the database performance, you should regularly reorganize/rebuild indexes. Otherwise, you can end up with performance degradation for the entire database.

First, check the fragmentation of indexes in the database. For this, execute the following code:

USE ELMA_SOURCE
GO
-- Index fragmentation in the current database, most fragmented first
SELECT s.[name] + '.' + t.[name] AS table_name
 ,i.[name] AS index_name
 ,ips.index_type_desc
 ,ROUND(ips.avg_fragmentation_in_percent, 2) AS avg_fragmentation_in_percent
 ,ips.record_count AS table_record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') ips
INNER JOIN sys.tables t ON t.[object_id] = ips.[object_id]
INNER JOIN sys.schemas s ON t.[schema_id] = s.[schema_id]
INNER JOIN sys.indexes i ON (ips.[object_id] = i.[object_id]) AND (ips.index_id = i.index_id)
ORDER BY ips.avg_fragmentation_in_percent DESC

The query returns the following columns: table_name, index_name, index_type_desc, avg_fragmentation_in_percent, and table_record_count.

First of all, pay attention to indexes with fragmentation higher than 15%. You should reorganize indexes with fragmentation between 15 and 30% and rebuild indexes with fragmentation over 30%; fragmentation of 0-15% is normal. Also, keep in mind that even after reorganizing/rebuilding, the fragmentation of some indexes may remain above normal. This usually happens with indexes on tables with few rows (up to 1000).
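
If you want to fix individual indexes manually, you can use the standard ALTER INDEX commands; the table and index names below are placeholders:

-- Reorganize a moderately fragmented index (15-30%); names are placeholders
ALTER INDEX IX_SomeIndex ON dbo.SomeTable REORGANIZE;

-- Rebuild a heavily fragmented index (over 30%)
ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD;

-- Rebuild all indexes on a table at once
ALTER INDEX ALL ON dbo.SomeTable REBUILD;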

The next step is to maintain indexes continuously and monitor their fragmentation. You can use the standard MSSQL Server tools (creating a maintenance plan is available in the Standard and Enterprise editions, but not in the Express edition). Create a maintenance plan with the necessary tasks for checking database integrity and reorganizing/rebuilding indexes, and assign the time and frequency at which these tasks repeat.

There is another great tool: the scripts at https://ola.hallengren.com/, which are widely recognized and used by many DBAs for everyday database maintenance. Follow these steps:

1. Download the script you need from the official website.

2. After downloading, run it in the Management Studio environment. The script initializes and creates several required stored procedures.

After that, you need to set up jobs and configure their execution time and frequency, or create a maintenance plan and add the created jobs to it.
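
For example, a schedule can be attached to one of the created jobs with msdb.dbo.sp_add_jobschedule; the job name below is an assumption based on the jobs that MaintenanceSolution.sql creates in a default setup:

USE msdb
GO
-- Run the index maintenance job every Sunday at 01:00 (job name is an assumption)
EXEC dbo.sp_add_jobschedule
    @job_name = N'IndexOptimize - USER_DATABASES',
    @name = N'Weekly - Sunday 01:00',
    @freq_type = 8,                  -- weekly
    @freq_interval = 1,              -- Sunday
    @freq_recurrence_factor = 1,     -- every 1 week
    @active_start_time = 010000      -- 01:00:00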

The ola.hallengren.com scripts make it possible to automate index reorganization and rebuilding. When a job executes, index fragmentation is checked automatically: if fragmentation is 15-30%, indexes are reorganized; if it is more than 30%, indexes are rebuilt (keeping the load on the database under control).

Other scripts from the same solution also let you check database integrity and set up automatic backups, which is very useful and even necessary. All parameters can be configured in the script itself.

Here is an example of creating a job: create a job in MSSQL Server and add a step with the following script for reorganizing/rebuilding indexes and checking database integrity.

EXECUTE dbo.IndexOptimize
@Databases = 'ELMA_SOURCE',
@FragmentationLow = NULL,            -- do nothing below @FragmentationLevel1
@FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',   -- actions between the two levels
@FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',                      -- actions above @FragmentationLevel2
@FragmentationLevel1 = 5,            -- lower fragmentation threshold, %
@FragmentationLevel2 = 30,           -- upper fragmentation threshold, %
@UpdateStatistics = 'ALL',           -- update index and column statistics
@OnlyModifiedStatistics = 'Y'        -- only statistics with modified data
EXECUTE dbo.DatabaseIntegrityCheck
@Databases = 'ELMA_SOURCE',
@CheckCommands = 'CHECKDB'           -- run DBCC CHECKDB

You can also add a separate step or job with a script for checking database integrity (the dbo.DatabaseIntegrityCheck procedure shown above).

In a similar way, it is possible to configure backups; script samples are available on ola.hallengren.com, and a minimal example is sketched below. After that, you need to start the job and set its time and frequency, or use a maintenance plan, select the created jobs in it, and save it. For instance, you can schedule the maintenance plan to run weekly, on Sundays.
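
A minimal sketch of such a backup step using the dbo.DatabaseBackup procedure from the same solution; the backup directory and retention time are assumptions:

EXECUTE dbo.DatabaseBackup
@Databases = 'ELMA_SOURCE',
@Directory = 'D:\Backup',      -- assumption: backup target folder
@BackupType = 'FULL',
@Verify = 'Y',
@CheckSum = 'Y',
@CleanupTime = 168             -- delete backups older than 168 hours (7 days)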


3. Fine-tuning and optimizing MSSQL Server (installation, initial setup, shrinking the transaction log)

When installing an MSSQL Server instance, you should select only those components that you really need. Follow the principle "the fewer the better". You can always install missing components later.

Leave the server instance path, as well as the system databases folder (master, model, msdb), at the default. For example:

C:\Program Files\Microsoft SQL Server\MSSQL14.SQLEXPRESS\MSSQL\DATA\

where MSSQL14.SQLEXPRESS is the server instance folder.

User databases should be placed on another drive. It is good practice to create three logical disks and store the main database file on one of them, the transaction log on the second, and the tempdb database on the third.

If your hardware allows it, ideally, the database file, the transaction log and the tempDB database should be stored on different physical disks to increase performance.
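
If you need to relocate tempdb to its own drive, here is a minimal sketch: the T:\ and L:\ paths are assumptions, while tempdev and templog are the default logical file names (you can verify them in sys.master_files).

-- Move the tempdb data and log files to dedicated drives (paths are assumptions)
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'L:\TempDB\templog.ldf');
-- The new locations are used after the SQL Server service is restarted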

It is advisable to allocate a separate machine for the MSSQL Server instance if possible! If you are using virtualization, allocate a separate virtual machine.

Usually, an explicit maximum server memory value is set for an installed MSSQL Server instance instead of leaving the default dynamic value. For example, if a dedicated server with 32 GB of RAM is allocated to the MSSQL Server instance, you can set the value to 30 GB, leaving 2 GB for the operating system and antimalware software.
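
A sketch of setting this limit with sp_configure (30 GB = 30720 MB; adjust the number to your hardware):

-- Allow changing advanced options, then cap the instance memory
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 30720;
RECONFIGURE;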

You should also make sure that the Auto Shrink option is disabled in the database settings, because it can have a significant impact on overall database performance.
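
You can check and disable the option with T-SQL as well; ELMA_SOURCE is the database name used in this article's examples:

-- Databases with Auto Shrink enabled
SELECT name, is_auto_shrink_on
FROM sys.databases
WHERE is_auto_shrink_on = 1;

-- Disable Auto Shrink for a specific database
ALTER DATABASE ELMA_SOURCE SET AUTO_SHRINK OFF;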

 

You should also keep the transaction log under control, as it may negatively affect performance. If the transaction log is larger than the data file, that is a bad sign and you need to take action.

Any transaction log consists of virtual log files (VLFs). The more VLFs there are, the worse it is for the transaction log. Best practice is to keep the number of VLFs under 50. If there are more than 1000 VLFs in the transaction log, you must take measures to reduce their number and then keep it within the normal range. To check the current number of virtual log files, use the DBCC LOGINFO command.
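
DBCC LOGINFO returns one row per virtual log file, so the number of rows is the current VLF count (the command is also used in the script below). On SQL Server 2016 SP2 and later, you can also count the VLFs with the sys.dm_db_log_info function, for example:

USE ELMA_SOURCE
GO
-- One row is returned per virtual log file; COUNT(*) gives the current number of VLFs
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID());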


A small autogrowth value for the transaction log may also be the cause, since each growth adds new virtual log files. You can change the autogrowth value in the database properties, on the Files page.

Or with Transact-SQL instructions:

--Instruction 1: check the current number of virtual log files--
USE ELMA_SOURCE
GO
DBCC LOGINFO
--Instruction 2: shrink the log file (use the logical name of your log file, listed in sys.database_files)--
DBCC SHRINKFILE (ELMA_SOURCE_log, 1)
GO
DBCC LOGINFO
--Instruction 3: back up the transaction log so inactive virtual log files can be truncated--
BACKUP LOG ELMA_SOURCE TO DISK = 'D:\test\ELMA_SOURCE_backup_log.trn'
GO
DBCC LOGINFO
--Instruction 4: set a fixed autogrowth value for the log file--
ALTER DATABASE ELMA_SOURCE
MODIFY FILE
	(
	NAME = ELMA_SOURCE_log,
	FILEGROWTH = 1000MB
	)
GO
DBCC LOGINFO

In this example, autogrowth is set to 1000 MB (roughly 1 GB). It is recommended to define the autogrowth value as a fixed size in megabytes rather than as a percentage.

4. Working with Performance Counters to troubleshoot issues related to server instance performance

Constant monitoring and finding the reasons behind performance degradation and bottlenecks play one of the most important roles in stable MSSQL Server operation. For this purpose, you can use Performance Counters (available in Windows by default). Performance counters let you view information as charts with value history, track value changes in real time, compare values, and view archived data. For convenience, you can also use third-party monitoring systems such as Zabbix.

Standard Windows counters are available by default. They let you monitor important data such as CPU load, RAM and disk usage, and detect bottlenecks in server operation. When an MSSQL Server instance is installed, special counters are added that are needed for monitoring the MSSQL Server itself.

Take a look at an example. Add the required counters for monitoring; in this case, the set consists of the system counters plus the MSSQL Server counters.

Listed below are best-practice performance counters used for system monitoring:

Memory

 

Available MBytes

Available memory in megabytes. If this counter shows a value below 200 MB, there are memory issues.

Physical Disk

 

Avg. Disk sec/Read _Total

Avg. Disk sec/Write _Total

 

Performance counter PhysicalDisk(_Total)\Avg. Disk sec/Read: the average time, in seconds, of a single disk read operation.

Maximum values should not exceed 15-20 milliseconds.

 

Performance counter PhysicalDisk(_Total)\Avg. Disk sec/Write: the average time, in seconds, of a single disk write operation.

Maximum values should not exceed 15-20 milliseconds.

Processor

 

%Processor Time _Total

%Processor Time 0

%Processor Time 1

%Processor Time 2

%Processor Time 3

Average CPU load (in total and per core). Check the CPU load to identify whether the CPU is a bottleneck of the server.

SQLServer Buffer Manager

 

Page life expectancy

Buffer Cache Hit Ratio

Performance counter SQL Server: Buffer Manager: Buffer cache hit ratio (the share of pages found in the buffer cache without reading from disk). The reference value is above 90%; lower values indicate memory issues.

 

Performance counter SQL Server: Buffer Manager: Page life expectancy (the average life expectancy of a page in the buffer cache, in seconds). The higher, the better; the commonly used reference value is 300. PLE is a counter you should monitor constantly, but its values point to memory issues only when they drop below normal (< 300) and stay there for a long time.
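
Both Buffer Manager counters can also be read from within the instance through the sys.dm_os_performance_counters view; the LIKE filter below accounts for the object name prefix, which differs for named instances:

-- Current page life expectancy, in seconds
SELECT cntr_value AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Page life expectancy';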

SQLServer General Statistics

 

User Connections

The number of users currently connected to the MSSQL Server.

SQLServer Memory Manager

 

Memory Grants Pending

Target Server Memory

Total Server Memory

Total Server Memory shows the amount of memory currently allocated to the server by the memory manager. Target Server Memory shows the ideal amount of memory the server needs. If Total Server Memory is lower than Target Server Memory, it indicates a lack of memory. Memory Grants Pending shows the number of processes waiting for a workspace memory grant; it should stay at or near zero.
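
A quick way to compare these values directly on the instance is to query the same counters through sys.dm_os_performance_counters:

-- Target/Total Server Memory are reported in KB; Memory Grants Pending is a plain count
SELECT RTRIM(counter_name) AS counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name IN ('Target Server Memory (KB)', 'Total Server Memory (KB)', 'Memory Grants Pending');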

SQLServer SQL Statistics

 

Batch Requests/sec

SQL Compilations/sec

SQL Re-Compilations/sec

Batch Requests/sec – the counter shows the SQL Server load; if the value is more than 1000 requests per second, the system load is high.

 

SQL Compilations/sec – SQL compilations per second.

Ideally, the value should be less than 10% of the Batch Requests/sec counter value.

 

SQL Re-Compilations/sec – average number of re-compilations per second. The lower the value of this counter, the better.

Ideally, the value should be less than 10% of the SQL Compilations/sec counter value.

System

 

Processor Queue Length

The current processor queue length, measured as the number of waiting threads. All processors share a single queue in which threads wait for processor cycles. If the computer has several processors, divide this value by the number of processors servicing the load. A sustained value above 2 may indicate processor congestion.

You can find the list of the main performance counters and their recommended values on the Microsoft website or in independent sources.

After creating and launching a data collector, collect the counter values; you can view reports on all counters in the system monitor at the same time or, for a better visual representation, use Zabbix.

Another specific and advantageous feature is that the collected metrics, located in C:\PerfLogs\Admin\MSSQL Collection\DESKTOP-F6195SA_20180802-000001\Performance Counter.blg, can be compared with the collected MSSQL Server Profiler tracings. It allows comparing requests or stored procedures graphically, in order to understand which resources were used and at what load when executing a request or a stored procedure. It looks like this: