Metadata is data that describes other data. SQL Server exposes a large amount of metadata. This includes structural metadata, which describes every object, and descriptive metadata, which describes the data itself. Metadata is exposed through catalog views, information schema views, dynamic management views, dynamic management functions, system functions, and stored procedures. Metadata is the key to automation, as it allows you to create dynamic, intelligent scripts.
I will assume that you are familiar with basic metadata objects, such as sys.tables and sys.databases. Instead, this chapter will focus on interrogating some less commonly known metadata objects that can prove very useful to DBAs. It will demonstrate how these metadata objects can be used to play a key role in automating maintenance routines. It is important to remember, however, that a discussion of all metadata objects available within SQL Server would be worthy of several volumes in its own right. Therefore, I suggest that you do your own research on SQL Server metadata and identify objects that will prove useful to your individual environment. A good place to start is the Dynamic Management Views and Functions page on MSDN, which can be found at https://msdn.microsoft.com/en-us/library/ms188754.aspx.
Creating Intelligent Routines
The following sections will demonstrate how SQL Server metadata can be used, in conjunction with PowerShell, to write intelligent routines that can be used to make your enterprise more consistent and better-performing, with less DBA effort. I will discuss configuring consistent port numbers, profiling instance workloads, using query store metadata, dynamic index rebuilds, and enforcing policies that cannot be enforced using Policy-Based Management.
Configuring a Consistent Port Number
Assuming an instance is configured to use TCP, a default instance of SQL Server will automatically be configured to use Port 1433. (Named instances will be configured to use a dynamic port.) Many DBAs choose to change the port number to a number that is not commonly known. This is a security measure that helps to slow down potential attackers, who may be looking to exploit port 1433. It is also considered good practice in some organizations to avoid using the SQL Browser service. While I do not usually recommend turning off the SQL Browser service, in environments in which this is the policy, I recommend using a consistent port number across all servers, to ease both application development and operational support.
The port number that an instance is running on can be found by looking in the SQL Server log, just after the instance has started, or, more conveniently, it is also exposed through the sys.dm_server_registry DMV. This DMV returns the columns detailed in Table 4-1.
Table 4-1. sys.dm_server_registry Columns
Column | Description |
---|---|
Registry_key | The name of the registry key |
Value_name | The name of the key’s value |
Value_data | The data contained within the value |
The script in Listing 4-1 will iterate through the instances within your SQL Server enterprise and check their port number configuration. It will then alert you if there is an instance configured to listen on the incorrect port. The ServerInstance parameter of the invoke-sqlcmd cmdlet should be specified as server\instance for named instances and simply as the server name for default instances.
Note
The script in Listing 4-1 pulls a list of server names from an array. In a real-world scenario, these server names would be pulled from your inventory database. Creating an inventory database and using it to assist with the automation effort is discussed in depth in Chapter 6.
Listing 4-1. Checking Port Configuration
import-module sqlps
$Port = "6152"
[array]$Servers = "ESASSMgmt1", "ESProd1", "ESProd2"
foreach ($Server in $Servers)
{
$Results = invoke-sqlcmd -ServerInstance "$Server" -User "sa" -Password 'Pa$$w0rd' -Query "SELECT *
FROM (
SELECT
CASE
WHEN value_name = 'TcpPort' AND value_data <> ''
THEN value_data
WHEN value_name = 'TcpPort' AND value_data = ''
THEN (
SELECT value_data
FROM sys.dm_server_registry
WHERE registry_key LIKE '%ipall'
AND value_name = 'TcpDynamicPorts'
)
END PortNumber
FROM sys.dm_server_registry
WHERE registry_key LIKE '%IPAll' ) a
WHERE a.PortNumber IS NOT NULL ;"
IF ($Results.PortNumber -ne $Port)
{
"The default instance on " + $Server + " is incorrectly configured to listen on Port " + $Results.PortNumber
}
}
"All other instances are correctly configured"
Note
The script will only work if your instance has the default network configuration, which uses IPAll. If the instance is listening on multiple IP addresses, then there will be no port configuration for IPAll.
The script begins by importing the sqlps module, which will allow us to use the invoke-sqlcmd cmdlet. The script then assigns a port number and defines a list of servers. These values would be pulled from your inventory database in a real-world scenario.
The script then uses a foreach loop, to iterate through every server in the array. It runs a query against each instance, which returns the current port number for that instance. The query runs against the sys.dm_server_registry DMV and attempts to return the static port number that is registered against IPAll. If this value is blank, it means that the instance is configured to use dynamic ports and, instead, returns the TCP dynamic port number.
Finally, the script uses an IF statement to determine if the instance’s port number matches the value stored in the $Port variable. If they do not match, it displays a warning.
The invoke-sqlcmd cmdlet operates in a very similar manner to SQLCMD in the command line and can be used to run scripts against an SQL Server instance. While the functionality is very similar, not all functionality has been implemented, as it is not required in the richer PowerShell environment. Table 4-2 lists the parameters that are not implemented in invoke-sqlcmd.
Table 4-2. Parameters Not Implemented
Parameter | Description |
---|---|
-r | Outputs messages to stderr |
-R | Uses client’s regional settings |
-q | Runs the specified query and remains running |
-f | Code page to use for output data |
-z | Changes a password and remains running |
-a | Packet size |
-s | Column separator |
-h | Controls output headers |
-k | Specifies control characters |
-Y | Fixed-length display width |
-y | Variable length display width |
-e | Echo input |
-I | Enables quoted identifiers |
-W | Removes trailing spaces |
-L | Lists instances |
-u | Formats output as Unicode |
-p | Prints statistics |
-c | Command end |
-E | Connects using Windows Authentication |
Tip
It is worth noting that some of the parameters are case-sensitive.
There are also two parameters that are new in the invoke-sqlcmd implementation that did not appear in the command-line version of SQLCMD. These parameters are listed in Table 4-3.
Table 4-3. New Parameters
Parameter | Description |
---|---|
-EncryptConnection | Connects using SSL encryption. |
-OutputSqlErrors | Displays errors |
Table 4-4 provides a description of the remaining invoke-sqlcmd parameters.
Table 4-4. invoke-sqlcmd Parameters
Parameter | Description |
---|---|
-ServerInstance | The server\instance name for invoke-sqlcmd to connect to |
-Database | The landing database |
-Query | The query or script that should be run against the database |
-Username | When using second-tier authentication, specifies the security principal to use |
-Password | When second-tier authentication is used, specifies the password associated with the security principal |
-Variable | Specifies variable values that should be passed into the script |
-QueryTimeout | Specifies the query time-out interval |
-AbortOnError | Terminates the script in the event of an error |
-DedicatedAdministratorConnection | Connects using the DAC, which is the “emergency door” into an SQL Server instance that is otherwise inaccessible |
-DisableCommands | Disables interactive commands, startup script, and environment variables |
-DisableVariables | Disables variable substitution |
-SeverityLevel | Specifies the minimum severity level that should be reported |
-ErrorLevel | Specifies the minimum error level that should be reported |
-ConnectionTimeout | Specifies the login time-out interval |
-HostName | Specifies the hostname that the script will run on |
-NewPassword | Changes the password and exits |
-InputFile | Specifies the name and location of a file containing an SQLCMD script that you wish to execute |
-MaxCharLength | Specifies the maximum length of character output |
-MaxBinaryLength | Specifies the maximum length of binary output |
Tip
An important note about invoke-sqlcmd is that the -variable parameter is expecting an array. This provides the ability to pass multiple scripting variables. Therefore, in order to pass scripting variables from PowerShell down to an SQLCMD script, you must first create an array containing the scripting variables and then pass this array into the -variable parameter. This holds true, even if you only have to pass a single variable.
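To make this concrete, the short sketch below passes two scripting variables into a query. The server, database, schema, and table names are my own illustrative choices, not from the chapter. Note the single-quoted query string, which stops PowerShell from expanding $(...) itself, leaving the substitution to SQLCMD.

```powershell
import-module sqlps

# The -Variable parameter expects an array, even when passing a single scripting variable
$SqlcmdVariables = @("SchemaName=dbo", "TableName=Orders")

# Single quotes ensure $(SchemaName) and $(TableName) reach SQLCMD intact
invoke-sqlcmd -ServerInstance "ESProd1" -Database "SalesDB" `
    -Query 'SELECT COUNT(*) AS RowCnt FROM $(SchemaName).$(TableName) ;' `
    -Variable $SqlcmdVariables
```

The same array can, of course, carry a dozen variables; the point is simply that a bare string such as "SchemaName=dbo" will not be accepted on its own.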
Profiling an Instance Workload
There are occasions when it is important to understand the workload that your instance supports. For example, you may wish to find out the read/write ratio across your instances. This can help in tasks such as consolidation planning or when planning the most optimal use of Buffer Cache Extensions.
A snapshot of this ratio can be obtained from the sys.dm_os_buffer_descriptors DMV. It provides a point-in-time snapshot, however, so you will have to use the script at regular intervals, on different days and at different times, to build a true picture. The sys.dm_os_buffer_descriptors DMV returns the columns listed in Table 4-5.
Table 4-5. sys.dm_os_buffer_descriptors Columns
Column | Description |
---|---|
database_id | The ID of the database that the page is associated with |
file_id | The ID of the data file that the page is associated with |
page_id | The ID of the page |
page_level | The level of the index that the page is used for |
allocation_unit_id | The ID of the allocation unit, to which the page is associated |
page_type | The type of page |
row_count | The number of rows stored within the page |
free_space_in_bytes | The amount of free space within the page, represented in bytes |
is_modified | Marks if the page is dirty. A value of 0 implies that the page is clean (has not been modified since being read from disk). A value of 1 indicates that the page is dirty (has been modified since being read from disk). |
numa_node | The NUMA (non-uniform memory access) node that the page is associated with |
read_microsec | The time it takes to read the page from disk into its buffer, represented in microseconds |
Every time a page is accessed, it is accessed in memory. All pages accessed are stored in an area of memory reserved by the SQL Server process, called the buffer cache. When a page is requested by a query, it can either be served up directly from the buffer cache, or if it is not already in memory, it is read from disk, into a buffer in the buffer cache and then served up from the cache. The sys.dm_os_buffer_descriptors DMV returns details of every page that is currently in the buffer cache.
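Before filtering on is_modified, it can be useful to see how the buffer cache is distributed. The following query is a simple sketch that aggregates sys.dm_os_buffer_descriptors per database, converting the page count to megabytes (there are 128 eight-KB pages per MB):

```sql
-- Approximate buffer cache usage per database, in MB
SELECT DB_NAME(database_id) AS database_name
       ,COUNT(*) / 128 AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC ;
```

Running this before and after a workload spike gives a quick sense of which databases are competing for the cache.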
In order to gain an idea of the ratio of reads to writes in our instance, we can make use of the is_modified flag, which indicates if the page is dirty. We will also filter on database_id and page_type. The filter on database_id will prevent system databases from skewing the results, and the filter on page_type will prevent the results from being skewed by system pages, such as GAM (Global Allocation Map), SGAM (Shared Global Allocation Map), PFS (Page Free Space), ML (Minimally Logged) map, and IAM (Index Allocation Map) pages, among others.
The script in Listing 4-2 demonstrates how we can extract the percentages of clean and dirty pages. The inner query uses PIVOT to extract the raw number of pages from sys.dm_os_buffer_descriptors and pivot the results from rows to columns. This makes it easy for the outer query to calculate the percentage of rows, by comparing the count of clean and dirty pages to a count of rows in the entire view.
Tip
A dirty page is a page that has been modified since the last time it was read from disk.
Listing 4-2. Extracting Percentage of Dirty Pages
SELECT
CAST((Clean /
(SELECT CAST(COUNT(*) AS FLOAT)
FROM sys.dm_os_buffer_descriptors b
WHERE page_type IN
('DATA_PAGE', 'INDEX_PAGE', 'TEXT_MIX_PAGE'))) * 100. AS DECIMAL(4,2)
) AS Clean_Percentage
, CAST((
Dirty /
(SELECT CAST(COUNT(*) AS FLOAT)
FROM sys.dm_os_buffer_descriptors b
WHERE page_type IN
('DATA_PAGE', 'INDEX_PAGE', 'TEXT_MIX_PAGE'))) * 100. AS DECIMAL(4,2)
) AS Dirty_Percentage
FROM
(
SELECT 'Count' as is_modified
, [0] AS Clean
, [1] AS Dirty
FROM
(
SELECT is_modified, 1 AS Page_Count
FROM sys.dm_os_buffer_descriptors
WHERE page_type IN ('DATA_PAGE', 'INDEX_PAGE', 'TEXT_MIX_PAGE')
AND database_id > 4 ) SourceTable
PIVOT
(
COUNT(Page_Count)
FOR is_modified IN ([0], [1])
) AS PivotTable
) a ;
If we call this script from PowerShell using the invoke-sqlcmd cmdlet, we can use the method demonstrated in Listing 4-1 to call the script against all servers in the enterprise. We can then schedule the PowerShell script to run at regular intervals, using SQL Server Agent, Windows Scheduled Tasks, or a third-party enterprise-scheduling tool, and save the results to a management database, from which we can later analyze the results and identify suitable candidates for a Buffer Cache Extension or targets for consolidation.
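As a hedged sketch of that workflow, the script below wraps the Listing 4-2 query in the server-looping technique from Listing 4-1 and writes each snapshot to a management database. The file path, management server name, database name, and table name are illustrative assumptions; substitute your own, and create the destination table before scheduling it.

```powershell
import-module sqlps

[array]$Servers = "ESASSMgmt1", "ESProd1", "ESProd2"

foreach ($Server in $Servers)
{
    # Run the Listing 4-2 query, saved to a file, against each instance
    $Results = invoke-sqlcmd -ServerInstance $Server -InputFile "C:\Scripts\DirtyPages.sql"

    # Persist the snapshot; dbo.BufferCacheSnapshots is an assumed table in an assumed Management database
    invoke-sqlcmd -ServerInstance "ESASSMgmt1" -Database "Management" -Query "
        INSERT INTO dbo.BufferCacheSnapshots (ServerName, SnapshotTime, CleanPct, DirtyPct)
        VALUES ('$Server', SYSUTCDATETIME(), $($Results.Clean_Percentage), $($Results.Dirty_Percentage)) ;"
}
```

Scheduling this via SQL Server Agent every hour builds the history needed to spot read-heavy instances over time.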
Query Store Metadata
The query store, introduced in SQL Server 2016, is a very exciting new feature for the serious DBA. It provides an interactive dashboard, which allows users to easily pinpoint problem queries, including those that have regressed in performance. It also stores historic query plans, which means that you can force a better performing query plan to be used, in the event of regression. This significantly reduces testing and remediation effort when deploying a new version of SQL Server or a new version of an application.
There are also great possibilities for DBAs who want to introduce or enhance their automation offering, as all query store data is also exposed through a set of catalog views. The following sections will discuss how this metadata can be used to remove unwanted ad hoc execution plans and how you can identify expensive queries across your estate.
Remove Ad Hoc Query Plans
Ad hoc query plans consume memory and can be of limited use. It is a good idea to remove ad hoc query plans if they are not being reused. The query in Listing 4-3 demonstrates how query store metadata can be used to clear unwanted ad hoc query plans from the cache.
The first part of the script runs a query against the sys.databases catalog view, to return a list of databases on the instance. The query store can be enabled or disabled at the database level and cannot be enabled on either master or TempDB system databases. Therefore, the query filters out databases with a database_id of less than three and any databases where query store is disabled.
Tip
The output is piped into a select-object command, because, otherwise, PowerShell will treat each result as an object, as opposed to a string, which will result in the second part of the script attempting to set the database context to System.Data.DataRow, which, of course, will result in an error.
The second part of the script passes the database names into a foreach loop and executes an SQL script against each one. This technique provides a useful mechanism for running the same command against every database in an instance, and I would highly recommend this approach over the undocumented sp_MSForeachdb system stored procedure, which can be unreliable. We could also scale this script to run across all instances in the enterprise, by using the technique shown in Listing 4-1.
Listing 4-3. Removing Ad Hoc Query Plans
clear-host
import-module sqlps
$databases = @(invoke-sqlcmd -Query "SELECT name
FROM sys.databases
WHERE database_id > 2
AND is_query_store_on = 1 ;") | select-object -expand Name
foreach ($database in $databases)
{
Invoke-Sqlcmd -Database $database -Query "DECLARE @SQL NVARCHAR(MAX)
SELECT @SQL =
(
SELECT 'EXEC sp_query_store_remove_query ' + CAST(qsq.query_id AS NVARCHAR(6)) + ';' AS [data()]
FROM sys.query_store_query_text AS qsqt
JOIN sys.query_store_query AS qsq
ON qsq.query_text_id = qsqt.query_text_id
JOIN sys.query_store_plan AS qsp
ON qsp.query_id = qsq.query_id
JOIN sys.query_store_runtime_stats AS qsrs
ON qsrs.plan_id = qsp.plan_id
GROUP BY qsq.query_id
HAVING SUM(qsrs.count_executions) = 1
AND MAX(qsrs.last_execution_time) < DATEADD (HH, -24, GETUTCDATE())
ORDER BY qsq.query_id
FOR XML PATH('')
) ;
EXEC(@SQL) ;"
}
The query that is run against each database uses the data() method approach to iteration, to build a string that can be executed using dynamic SQL, before finally executing the script, using the EXEC statement. The metadata objects used by the query are described in the following sections.
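The data() construct used in the query is worth isolating: when a column is aliased as [data()] inside FOR XML PATH(''), SQL Server concatenates the value from every row into one space-separated string, which is what turns one row per query_id into a single executable batch. The following is a minimal, self-contained sketch of the same trick; the table variable and its values are hypothetical stand-ins for the query store IDs.

```sql
-- Build one EXEC statement per row, concatenated into a single string
DECLARE @Ids TABLE (id INT) ;
INSERT INTO @Ids VALUES (1), (2), (3) ;

SELECT 'EXEC sp_query_store_remove_query ' + CAST(id AS NVARCHAR(6)) + ';' AS [data()]
FROM @Ids
FOR XML PATH('') ;
```

The single string this produces can then be passed to EXEC, exactly as Listing 4-3 does with @SQL.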
sys.query_store_query_text
The sys.query_store_query_text catalog view exposes the text and handle of each T-SQL statement in the store. It returns the columns detailed in Table 4-6.
Table 4-6. Columns Returned by sys.query_store_query_text
Column | Description |
---|---|
query_text_id | The primary key |
query_sql_text | The T-SQL text of the query |
statement_sql_handle | The SQL handle of the query |
is_part_of_encrypted_module | Indicates whether the SQL text is part of an encrypted programmable object |
has_restricted_text | Indicates whether the SQL text includes sensitive information, such as a password, that cannot be displayed |
sys.query_store_query
The sys.query_store_query catalog view joins to the sys.query_store_query_text catalog view on the query_text_id column in each view. It also joins to sys.query_context_settings on the context_settings_id column in each object. sys.query_store_query returns details of the queries, including aggregated statistics. The columns returned are detailed in Table 4-7.
Table 4-7. Columns Returned by sys.query_store_query
Column | Description |
---|---|
query_id | The primary key |
query_text_id | Foreign key that joins to sys.query_store_query_text |
context_settings_id | Foreign key that joins to sys.query_context_settings |
object_id | The object ID of the programmable object, of which the query is a part. 0 represents ad hoc SQL. |
batch_sql_handle | The ID of the batch of which the query is a part. This field is only populated when the query uses table variables or temp tables. |
query_hash | The query tree represented as an MD5 hash value |
is_internal_query | Indicates if the query is a user or internal query: • 0 indicates a user query • 1 indicates an internal query |
query_parameterization_type | The parameterization used for the query: • 0 indicates none • 1 indicates user parameterization • 2 indicates simple parameterization • 3 indicates forced parameterization |
query_parameterization_type_desc | Description of the parameterization type |
initial_compile_start_time | The date and time that the query was compiled for the first time |
last_compile_start_time | The date and time of the most recent compilation of the query |
last_execution_time | The date and time of the most recent execution of the query |
last_compile_batch_sql_handle | The handle of SQL batch of which the query was a part on the most recent occurrence of the query being executed |
last_compile_batch_offset_start | The offset of the start of the query from the beginning of the batch that is represented by last_compile_batch_sql_handle |
last_compile_batch_offset_end | The offset of the end of the query from the beginning of the batch that is represented by last_compile_batch_sql_handle |
count_compiles | A count of how many times that the query has been compiled |
avg_compile_duration | The average length of time that it has taken the query to compile, recorded in microseconds |
last_compile_duration | The duration of the most recent compilation of the query, recorded in microseconds |
avg_bind_duration | The average time it takes to bind the query. Binding is the process of resolving every table and column name to an object in the system catalog and creating an algebrization tree. |
last_bind_duration | The time it took to bind the query on its most recent execution |
avg_bind_cpu_time | The average CPU time that has been required to bind the query |
last_bind_cpu_time | The amount of CPU time required to bind the query on its most recent compilation |
avg_optimize_duration | The average length of time that the query optimizer takes to generate candidate query plans and select the most efficient |
last_optimize_duration | The time it took to optimize the query during its most recent compilation |
avg_optimize_cpu_time | The average CPU time that has been required to optimize the query |
last_optimize_cpu_time | The amount of CPU time that was required to optimize the query on its most recent compilation |
avg_compile_memory_kb | The average amount of memory used to compile the query |
last_compile_memory_kb | The amount of memory used to compile the query during its most recent compilation |
max_compile_memory_kb | The largest amount of memory that has been required on any compilation of the query |
is_clouddb_internal_query | Indicates if the query is a user query or generated internally. This column only applies to the Azure SQL Database. It will always return 0 if the query is not run against the Azure SQL Database. When run on the Azure SQL Database: • 0 indicates a user query • 1 indicates an internal query |
sys.query_store_plan
The sys.query_store_plan catalog view exposes details about the execution plans. The view joins to sys.query_store_query catalog view on the query_id column in each table. The columns returned by this view are detailed in Table 4-8.
Table 4-8. Columns Returned by sys.query_store_plan
Column | Description |
---|---|
plan_id | The primary key |
query_id | Foreign key that joins to sys.query_store_query |
plan_group_id | The ID of the plan group that the plan is part of. This is only applicable to queries involving a cursor, as they require multiple plans. |
engine_version | The version of the database engine that was used to compile the plan |
compatibility_level | The compatibility level of the database that the query references |
query_plan_hash | The plan represented in an MD5 hash |
query_plan | The XML representation of the query plan |
is_online_index_plan | Indicates if the plan was used during an online index rebuild: • 0 indicates that it was not • 1 indicates that it was |
is_trivial_plan | Indicates if the optimizer regarded the plan as trivial: • 0 indicates that it is not a trivial plan • 1 indicates that it is a trivial plan |
is_parallel_plan | Indicates if the plan is parallelized: • 0 indicates that it is not a parallel plan • 1 indicates that it is a parallel plan |
is_forced_plan | Indicates if the plan is forced: • 0 indicates it is not forced • 1 indicates that it is forced |
is_natively_compiled | Indicates if the plan includes natively compiled stored procedures: • 0 indicates that it does not • 1 indicates that it does |
force_failure_count | Indicates the number of times an attempt to force the plan has failed |
last_force_failure_reason | The reason for the failure in forcing the plan. Further details can be found in Table 4-9. |
last_force_failure_reason_desc | Text describing the reason for the failure in forcing the plan. Further details can be found in Table 4-9. |
count_compiles | A count of how many times that the query has been compiled |
initial_compile_start_time | The date and time that the query was compiled for the first time |
last_compile_start_time | The date and time of the most recent compilation of the query |
last_execution_time | The date and time of the most recent execution of the query |
avg_compile_duration | The average length of time that it has taken the query to compile, recorded in microseconds |
last_compile_duration | The duration of the most recent compilation of the query, recorded in microseconds |
If plan forcing fails, then the last_force_failure_reason and last_force_failure_reason_desc columns describe the reason for the failure. Table 4-9 defines the mapping between codes and reasons.
Table 4-9. Plan Forcing Failure Reasons
Reason Code | Reason Text | Description |
---|---|---|
0 | N/A | No failure |
8637 | ONLINE_INDEX_BUILD | The query attempted to update a table while an index on the table was being rebuilt |
8683 | INVALID_STARJOIN | The star join included in the plan is invalid |
8684 | TIME_OUT | While searching for the forced plan, the optimizer exceeded the threshold for allowed operations. |
8689 | NO_DB | The plan references a database that does not exist. |
8690 | HINT_CONFLICT | The plan conflicts with a query hint. For example, FORCE_INDEX(0) is used on a table with a clustered index. |
8694 | DQ_NO_FORCING_SUPPORTED | The plan conflicts with the use of full-text queries or distributed queries. |
8698 | NO_PLAN | Either the optimizer could not produce the forced plan, or it could not be verified. |
8712 | NO_INDEX | An index used by the plan does not exist. |
8713 | VIEW_COMPILE_FAILED | There is an issue with an indexed view that the plan uses. |
Other values | GENERAL_FAILURE | An error not covered by the other codes |
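These reason codes lend themselves to monitoring. The query below is a sketch that surfaces any forced plan that has failed to apply at least once, together with the most recent failure reason, using only the sys.query_store_plan columns described in Table 4-8:

```sql
-- Forced plans that have failed to apply, with the latest failure reason
SELECT plan_id
       ,query_id
       ,force_failure_count
       ,last_force_failure_reason_desc
FROM sys.query_store_plan
WHERE is_forced_plan = 1
AND force_failure_count > 0 ;
```

Run per database (plan forcing is a database-level feature), this makes a natural addition to a daily health-check routine.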
sys.query_store_runtime_stats
The sys.query_store_runtime_stats catalog view exposes the aggregated runtime statistics for each query plan. The view joins to sys.query_store_plan using the plan_id column in each table and joins to the sys.query_store_runtime_stats_interval catalog view, using the runtime_stats_interval_id column in each object. The columns returned by the catalog view are described in Table 4-10.
Table 4-10. Columns Returned by sys.query_store_runtime_stats
Column | Description |
---|---|
runtime_stats_id | The primary key |
plan_id | Foreign key, joining to sys.query_store_plan |
runtime_stats_interval_id | Foreign key, joining to sys.query_store_runtime_stats_interval |
execution_type | Specifies the execution status: • 0 indicates a regular execution that finished successfully • 1 indicates the client aborted the execution • 2 indicates that the execution was aborted because of an exception |
execution_type_desc | A textual description of the execution_type column |
first_execution_time | The date and time that the plan was first executed |
last_execution_time | The date and time of the most recent execution of the plan |
count_executions | The number of times that the plan has been executed |
avg_duration | The average length of time that the query has taken to execute, expressed in microseconds |
last_duration | The time that the query took to execute on its most recent execution, expressed in microseconds |
min_duration | The shortest length of time that the plan has taken to execute, expressed in microseconds |
max_duration | The longest amount of time that the plan has taken to execute, expressed in microseconds |
stdev_duration | The standard deviation of time spent executing the plan, expressed in microseconds |
avg_cpu_time | The average amount of CPU time used by executions of the plan, expressed in microseconds |
last_cpu_time | The amount of CPU time used by the most recent execution of the plan, expressed in microseconds |
min_cpu_time | The minimum amount of CPU time used by any execution of the plan, expressed in microseconds |
max_cpu_time | The maximum amount of CPU time used by any execution of the plan, expressed in microseconds |
stdev_cpu_time | The standard deviation of CPU time used by executions of the plan, expressed in microseconds |
avg_logical_io_reads | The average number of logical reads required by the plan |
last_logical_io_reads | The number of logical reads required by the most recent execution of the plan |
min_logical_io_reads | The smallest number of logical reads required by any execution of the plan |
max_logical_io_reads | The maximum number of logical reads required by any execution of the plan |
stdev_logical_io_reads | The standard deviation of logical reads required by executions of the plan |
avg_logical_io_writes | The average number of logical writes required by the plan |
last_logical_io_writes | The number of logical writes required by the most recent execution of the plan |
min_logical_io_writes | The smallest number of logical writes required by any execution of the plan |
max_logical_io_writes | The maximum number of logical writes required by any execution of the plan |
stdev_logical_io_writes | The standard deviation of logical writes required by executions of the plan |
avg_physical_io_reads | The average number of reads from disk required by the plan |
last_physical_io_reads | The number of reads from disk required by the most recent execution of the plan |
min_physical_io_reads | The smallest number of reads from disk required by any execution of the plan |
max_physical_io_reads | The maximum number of reads from disk required by any execution of the plan |
stdev_physical_io_reads | The standard deviation of reads from disk required by executions of the plan |
avg_clr_time | The average CLR (Common Language Runtime) time required by executions of the plan, expressed in microseconds |
last_clr_time | The CLR time required on the most recent execution of the plan, expressed in microseconds |
min_clr_time | The smallest amount of CLR time required by any execution of the plan, expressed in microseconds |
max_clr_time | The longest CLR time spent by any execution of the plan, expressed in microseconds |
stdev_clr_time | The standard deviation of CLR time required by executions of the plan, expressed in microseconds |
avg_dop | The average degree of parallelism used by execution of the plan |
last_dop | The degree of parallelism used on the most recent execution of the plan |
min_dop | The lowest degree of parallelism used by any execution of the plan |
max_dop | The largest degree of parallelism used by any execution of the plan |
stdev_dop | The standard deviation of the degree of parallelism used by executions of the plan |
avg_query_max_used_memory | The average number of memory grants issued for any execution of the plan, expressed as a count of 8KB pages* |
last_query_max_used_memory | The number of memory grants issued during the most recent execution of the plan, expressed as a count of 8KB pages* |
min_query_max_used_memory | The smallest number of memory grants issued for any execution of the plan, expressed as a count of 8KB pages* |
max_query_max_used_memory | The largest amount of memory grants issued for any execution of the plan, expressed as a count of 8KB pages* |
stdev_query_max_used_memory | The standard deviation of memory grants issued during the executions of the plan, expressed as a count of 8KB pages* |
avg_rowcount | The average number of rows affected by executions of the plan |
last_rowcount | The number of rows affected by the most recent execution of the plan |
min_rowcount | The minimum number of rows affected by any execution of the plan |
max_rowcount | The largest number of rows affected by any execution of the plan |
stdev_rowcount | The standard deviation of rows affected, across executions of the plan |
Identifying the Most Expensive Queries
Imagine a scenario in which you have a private cloud infrastructure that is supporting a number of data-tier applications. The Hyper-V farm supporting the private cloud is coming under generalized CPU pressure, and you want to identify the most expensive queries across the whole environment, in order to focus a query-tuning effort. This data is at hand, through query store metadata.
The script in Listing 4-4 demonstrates how you can retrieve the five most expensive queries and plans in the current database, based on the average CPU time required to execute them. The query filters out any queries that have not been executed in the previous seven days, because if the queries are being run infrequently, you are probably not going to be very interested in them.
Listing 4-4. Retrieving the Most Expensive Queries
SELECT TOP 5
qsqt.query_sql_text
,qsp.query_plan
,CAST(AVG(qsrs.avg_duration) AS NUMERIC(10,2)) Avg_Duration
,CAST(AVG(qsrs.avg_cpu_time) AS NUMERIC(10,2)) Avg_CPU
,CAST((AVG(qsrs.avg_cpu_time) / AVG(qsrs.avg_duration)) * 100 AS NUMERIC(5,2)) Avg_CPU_Percent
FROM sys.query_store_query_text AS qsqt
JOIN sys.query_store_query AS qsq
ON qsq.query_text_id = qsqt.query_text_id
JOIN sys.query_store_plan AS qsp
ON qsp.query_id = qsq.query_id
JOIN sys.query_store_runtime_stats AS qsrs
ON qsrs.plan_id = qsp.plan_id
WHERE qsq.is_internal_query = 0
AND qsp.last_execution_time >= GETUTCDATE() - 7
GROUP BY qsqt.query_sql_text
,qsp.query_plan
ORDER BY AVG(qsrs.avg_cpu_time) DESC ;
We can use the technique that you learned in Listing 4-2 to iterate through each database on the instance, and the technique that you learned in Listing 4-1 to iterate through every instance in the enterprise. We will achieve this using nested foreach loops. The final PowerShell script can be found in Listing 4-5. You will notice that two additional columns have been added, so that the instance and database can easily be identified in the combined results. The results are also redirected to a file called C:\SQLResults.txt.
Listing 4-5. Finding the Most Expensive Queries in the Enterprise
import-module sqlps
[array]$Servers = "ESASSMGMT1", "ESPROD1", "ESPROD2"
foreach ($Server in $Servers)
{
$databases = @(invoke-sqlcmd -serverinstance $server -Query "SELECT name
FROM sys.databases
WHERE database_id > 2
AND is_query_store_on = 1") | select-object -expand Name
foreach ($database in $databases)
{
Invoke-Sqlcmd -serverinstance $server -Database $database -Query "SELECT TOP 5
@@SERVERNAME AS ServerName
,DB_NAME() AS DatabaseName
,qsqt.query_sql_text
,qsp.query_plan
,CAST(AVG(qsrs.avg_duration) AS NUMERIC(10,2)) Avg_Duration
,CAST(AVG(qsrs.avg_cpu_time) AS NUMERIC(10,2)) Avg_CPU
,CAST((AVG(qsrs.avg_cpu_time) / AVG(qsrs.avg_duration)) * 100 AS NUMERIC(5,2)) Avg_CPU_Percent
FROM sys.query_store_query_text AS qsqt
JOIN sys.query_store_query AS qsq
ON qsq.query_text_id = qsqt.query_text_id
JOIN sys.query_store_plan AS qsp
ON qsp.query_id = qsq.query_id
JOIN sys.query_store_runtime_stats AS qsrs
ON qsrs.plan_id = qsp.plan_id
WHERE qsq.is_internal_query = 0
AND qsp.last_execution_time >= GETUTCDATE() - 7
GROUP BY qsqt.query_sql_text
,qsp.query_plan
ORDER BY AVG(qsrs.avg_cpu_time) DESC ;" >> C:\SQLResults.txt
}
}
Tip
As well as assisting in the diagnosis and remediation of issues such as CPU pressure in a private cloud, you may also want to consider setting up a weekly report to identify the most expensive queries across the enterprise. It is this kind of proactive administration that will allow you to troubleshoot issues before the business even knows about them and offer an outstanding platform service.
Dynamic Index Rebuilds
SQL Server does not provide out-of-the-box functionality for intelligently rebuilding indexes. If you use a Maintenance Plan to rebuild or reorganize all indexes in your database, every index will be rebuilt or reorganized, whether it needs it or not. Unfortunately, this can be a resource-intensive and time-consuming process, and it may be that only a fraction of your indexes need to be rebuilt.
To work around this issue, you can use index-related metadata to evaluate the fragmentation level of each index and take the appropriate action (rebuild, reorganize, or leave alone). To achieve this, we will use the sys.dm_db_index_physical_stats dynamic management function. This function returns fragmentation statistics for indexes, with one row returned for every level of every index within scope (based on the input parameters). The parameters required to call this function are detailed in Table 4-11.
Table 4-11. Parameters Required by sys.dm_db_index_physical_stats
Parameter | Description |
---|---|
Database_ID | The ID of the database that the function will run against. If you do not know it, you can pass in DB_ID('DatabaseName'). |
Object_ID | The Object ID of the table that you want to run the function against. If you do not know it, pass in OBJECT_ID('TableName'). Pass NULL to run the function against all tables in the database. |
Index_ID | The index ID of the index you want to run the function against. 1 is always the ID of a table’s clustered index. Pass NULL to run the function against all indexes on the table. |
Partition_Number | The ID of the partition that you want to run the function against. Pass NULL if you want to run the function against all partitions, or if the table is not partitioned. |
Mode | Acceptable values are LIMITED, SAMPLED, or DETAILED. • LIMITED only scans the non-leaf levels of an index. • SAMPLED scans 1% of pages in the table, unless the table or partition has 10,000 pages or less, in which case DETAILED mode is used. • DETAILED mode scans 100% of the pages in the table. For very large tables, SAMPLED is often preferred, due to the length of time it can take to return data in DETAILED mode. |
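The mode-selection rules above can be sketched as a small helper. This is illustrative Python only; the function name is invented here, and the behavior simply paraphrases the 10,000-page fallback rule from the table — it is not part of any SQL Server API:

```python
def effective_scan_mode(requested_mode, page_count):
    """Return the scan mode sys.dm_db_index_physical_stats effectively uses.

    SAMPLED falls back to DETAILED for small objects (10,000 pages or
    fewer), because sampling 1% of so few pages would not be meaningful.
    """
    requested_mode = requested_mode.upper()
    if requested_mode not in ("LIMITED", "SAMPLED", "DETAILED"):
        raise ValueError("Mode must be LIMITED, SAMPLED, or DETAILED")
    if requested_mode == "SAMPLED" and page_count <= 10000:
        return "DETAILED"
    return requested_mode
```

For example, requesting SAMPLED against a 5,000-page table behaves as DETAILED, while the same request against a 50,000-page table samples as asked.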
Tip
This book assumes that you are familiar with B-Tree indexes and fragmentation (both internal and external). If you require a refresher, full details can be found in Apress Pro SQL Server Administration, which can be purchased at www.apress.com/9781484207116?gtmf=s .
The columns returned by sys.dm_db_index_physical_stats are detailed in Table 4-12.
Table 4-12. Columns Returned by sys.dm_db_index_physical_stats
Column | Description |
---|---|
database_id | The database ID of the database, in which the index resides |
object_id | The object ID of the table or indexed view with which the index is associated |
index_id | The ID of the index within the table or indexed view. The ID is only unique within its associated parent object. 0 is always the index ID of a heap, and 1 is always the index ID of a clustered index. |
partition_number | The partition number. This always returns 1 for non-partitioned tables and indexed views. |
index_type_desc | Text defining the index types. Possible return values are as follows: • HEAP • CLUSTERED INDEX • NONCLUSTERED INDEX • PRIMARY XML INDEX • SPATIAL INDEX • XML INDEX (Referring to a secondary XML index) There are also some index types used for internal use only. These are as follows: • COLUMNSTORE MAPPING INDEX • COLUMNSTORE DELETE BUFFER INDEX • COLUMNSTORE DELETE BITMAP INDEX |
hobt_id | The ID of the heap or B-Tree |
alloc_unit_type_desc | Text describing the type of allocation unit that the index level resides in |
index_depth | The number of levels that the index consists of. A value of 1 indicates a heap, LOB data, or row-overflow data. |
index_level | The level of the index that is represented by the row returned |
avg_fragmentation_in_percent | The average percentage of external fragmentation within the index level (lower is better) |
fragment_count | The number of fragments within the index level |
avg_fragment_size_in_pages | The average size of each fragment, represented as a count of 8KB pages |
page_count | For in-row data, the page count represents the number of pages at the level of the index that is represented by the row. For a heap, the page count represents the number of pages within the IN_ROW_DATA allocation unit. For LOB or row overflow data, the page count represents the number of pages within the allocation unit. |
avg_page_space_used_in_percent | The average amount of internal fragmentation within the index level that the row represents (higher is better…usually) |
record_count | For in-row data, the record count represents the number of records at the level of the index that is represented by the row. For a heap, the record count represents the number of records within the IN_ROW_DATA allocation unit. For LOB or row overflow data, the record count represents the number of records within the allocation unit. |
ghost_record_count | A count of logically deleted rows that are awaiting physical deletion by the ghost cleanup task |
version_ghost_record_count | The number of rows that have been retained by a transaction using the snapshot isolation level |
min_record_size_in_bytes | The size of the smallest record, expressed in bytes |
max_record_size_in_bytes | The size of the largest record, expressed in bytes |
avg_record_size_in_bytes | The average record size, expressed in bytes |
forwarded_record_count | The number of records that have forwarding pointers to other locations. Only applies to heaps. If the object is an index, NULL is returned. |
compressed_page_count | A count of the compressed pages within the index level represented by the row |
hobt_id | New in SQL Server 2016, this column only applies to columnstore indexes. It represents the ID of the heaps or B-Trees that are used to track internal columnstore data. |
column_store_delete_buffer_state | New in SQL Server 2016, this column only applies to columnstore indexes. It represents the state of the columnstore delete buffer. |
column_store_delete_buffer_state_desc | New in SQL Server 2016, this column only applies to columnstore indexes. It is a textual description of the value returned in the column_store_delete_buffer_state column. |
The script in Listing 4-6 demonstrates how you can use this dynamic management function to determine which indexes have more than 25% external fragmentation and rebuild only those indexes.
Listing 4-6. Dynamic Index Rebuilds
clear-host
import-module sqlps
[array]$Servers = "ESASSMGMT1", "ESPROD1", "ESPROD2"
foreach ($Server in $Servers)
{
$databases = @(invoke-sqlcmd -serverinstance $server -Query "SELECT name
FROM sys.databases
WHERE database_id > 4 ;") | select-object -expand Name
foreach ($database in $databases)
{
Invoke-Sqlcmd -serverinstance $server -Database $database -QueryTimeout 7200 -Query "
DECLARE @SQL NVARCHAR(MAX)
SET @SQL =
(
SELECT 'ALTER INDEX '
+ QUOTENAME(i.name)
+ ' ON ' + QUOTENAME(s.name)
+ '.'
+ QUOTENAME(OBJECT_NAME(i.object_id))
+ ' REBUILD ; '
FROM sys.dm_db_index_physical_stats(DB_ID(),NULL,NULL,NULL,'DETAILED') ps
INNER JOIN sys.indexes i
ON ps.object_id = i.object_id
AND ps.index_id = i.index_id
INNER JOIN sys.objects o
ON ps.object_id = o.object_id
INNER JOIN sys.schemas s
ON o.schema_id = s.schema_id
WHERE index_level = 0
AND avg_fragmentation_in_percent > 25
FOR XML PATH('')
) ;
EXEC(@SQL) ;"
}
}
This script uses the FOR XML PATH('') technique to build and execute a string that contains an ALTER INDEX statement for every index in the specified database that has more than 25% external fragmentation at the leaf level. The script joins sys.dm_db_index_physical_stats to sys.schemas to obtain the name of the schema that contains the index. There is no direct join between these objects, so the join is implemented through an intermediate catalog view (sys.objects). The script also joins to sys.indexes to retrieve the name of the index, while the object_id is passed to the OBJECT_NAME() function to determine the name of the table.
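The script above rebuilds every index over the 25% threshold. A common refinement is a three-way decision: leave small or lightly fragmented indexes alone, reorganize moderately fragmented ones, and rebuild only the worst. The sketch below illustrates that decision logic in Python; the 5%/30% thresholds and the minimum page count are widely cited guidance, not values taken from the script above, and the function name is invented for illustration:

```python
def index_action(avg_fragmentation_percent, page_count,
                 reorg_threshold=5.0, rebuild_threshold=30.0, min_pages=1000):
    """Decide what to do with an index, based on leaf-level fragmentation.

    Very small indexes are skipped: fragmentation in an index of only a
    few pages rarely affects performance, and rebuilding it is wasted work.
    """
    if page_count < min_pages:
        return "LEAVE"
    if avg_fragmentation_percent >= rebuild_threshold:
        return "REBUILD"
    if avg_fragmentation_percent >= reorg_threshold:
        return "REORGANIZE"
    return "LEAVE"
```

The same branching could be added to the T-SQL string-builder by generating either REBUILD or REORGANIZE, depending on avg_fragmentation_in_percent and page_count.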
Enforcing Policies
Policy-Based Management (PBM) is a great tool for ensuring that your policies are met across the Enterprise. It does have limitations, however. Specifically, some facets only allow you to report on policy violation but not actually to prevent the change that will cause the violation. Therefore, many DBAs will choose to enforce some of their key policies by using custom scripting.
xp_cmdshell is a system stored procedure that allows you to run operating system commands from within a SQL Server instance. There seems to be some debate among DBAs as to whether this is a security hole, but I have always found it cut and dried. The argument for xp_cmdshell not being a security risk generally revolves around "Only my team and I are allowed to use it." But what if the instance is compromised by a malicious user? In that case, you really do not want to reduce the number of steps hackers must take before they can spread their attack out to the network.
PBM can report on xp_cmdshell, but it cannot prevent it being enabled. You can, however, create a PowerShell script that will check for xp_cmdshell being enabled and disable it, if required. The script can then be scheduled to run on a regular basis.
The script in Listing 4-7 demonstrates how to use the techniques that I have already discussed in this chapter to loop around every instance and disable xp_cmdshell, if required. The script discovers if the feature is incorrectly configured, by checking its status in the sys.configurations catalog view. It will then use sp_configure to reconfigure the setting. There are two calls of sp_configure, because xp_cmdshell is an advanced option, and the show advanced options property must be configured before the setting can be changed. The RECONFIGURE command forces the change.
There are two commands for reconfiguring the instance following a change made by sp_configure: RECONFIGURE and RECONFIGURE WITH OVERRIDE. RECONFIGURE will change the running value of the setting, as long as the newly configured value is regarded as "sensible" by SQL Server. For example, RECONFIGURE will not allow you to disable contained databases while they exist on the instance. RECONFIGURE WITH OVERRIDE will allow this action, even though your contained databases will no longer be accessible. Even with this command, however, SQL Server will still run checks to ensure that the value you have entered is between the allowable minimum and maximum values for the setting, and it will not allow any operation that would cause fatal errors in the database engine. For example, the procedure will not allow you to configure the Min Server Memory (MB) setting to be higher than the Max Server Memory (MB) setting.
Listing 4-7. Disable xp_cmdshell Throughout the Enterprise
import-module sqlps
[array]$Servers = "ESASSMGMT1", "ESPROD1", "ESPROD2"
foreach ($Server in $Servers)
{
invoke-sqlcmd -ServerInstance "$Server" -Query "IF
(
--Check if xp_cmdshell is enabled
SELECT value_in_use
FROM sys.configurations
WHERE name = 'xp_cmdshell'
) = 1
BEGIN
--Turn on advanced options
EXEC sp_configure 'show advanced options', 1 ;
RECONFIGURE
--Turn off xp_cmdshell
EXEC sp_configure 'xp_cmdshell', 0 ;
RECONFIGURE
END"
}
This approach can be used to configure many aspects of your enterprise’s surface area. In SQL Server 2016, the features that can be configured with this methodology are detailed in Table 4-13.
Table 4-13. Settings That Can Be Changed with sp_configure
Setting | Description |
---|---|
recovery interval (min) | Maximum recovery interval in minutes |
allow updates | Allows updates to system tables |
user connections | Number of user connections allowed |
locks | Number of locks for all users |
open objects | Number of open database objects |
fill factor (%) | Default fill factor percentage |
disallow results from triggers | Disallows returning results from triggers |
nested triggers | Allows triggers to be invoked within triggers |
server trigger recursion | Allows recursion for server level triggers |
remote access | Allows remote access |
default language | Default language |
cross db ownership chaining | Allows cross db ownership chaining |
max worker threads | Maximum worker threads |
network packet size (B) | Network packet size |
show advanced options | Shows advanced options |
remote proc trans | Creates DTC transaction for remote procedures |
c2 audit mode | c2 audit mode |
default full-text language | Default full-text language |
two digit year cutoff | Two-digit year cutoff |
index create memory (KB) | Memory for index create sorts (KBytes) |
priority boost | Priority boost |
remote login timeout (s) | Remote login time-out |
remote query timeout (s) | Remote query time-out |
cursor threshold | Cursor threshold |
set working set size | Sets working set size |
user options | User options |
affinity mask | Affinity mask |
max text repl size (B) | Maximum size of a text field in replication |
media retention | Tape retention period in days |
cost threshold for parallelism | Cost threshold for parallelism |
max degree of parallelism | Maximum degree of parallelism |
min memory per query (KB) | Minimum memory per query (KBytes) |
query wait (s) | Maximum time to wait for query memory (s) |
min server memory (MB) | Minimum size of server memory (MB) |
max server memory (MB) | Maximum size of server memory (MB) |
query governor cost limit | Maximum estimated cost allowed by query governor |
lightweight pooling | User mode scheduler uses lightweight pooling |
scan for startup procs | Scan for startup stored procedures |
affinity64 mask | Affinity64 mask |
affinity I/O mask | Affinity I/O mask |
affinity64 I/O mask | Affinity64 I/O mask |
transform noise words | Transforms noise words for full-text query |
precompute rank | Uses precomputed rank for full-text query |
PH timeout (s) | DB connection time-out for full-text protocol handler (s) |
clr enabled | CLR user code execution enabled in the server |
max full-text crawl range | Maximum crawl ranges allowed in full-text indexing |
ft notify bandwidth (min) | Number of reserved full-text notifications buffers |
ft notify bandwidth (max) | Maximum number of full-text notifications buffers |
ft crawl bandwidth (min) | Number of reserved full-text crawl buffers |
ft crawl bandwidth (max) | Maximum number of full-text crawl buffers |
default trace enabled | Enables or disables the default trace |
blocked process threshold (s) | Blocked process reporting threshold |
in-doubt xact resolution | Recovery policy for DTC transactions with unknown outcome |
remote admin connections | Dedicated Admin Connections are allowed from remote clients |
common criteria compliance enabled | Common Criteria compliance mode enabled |
EKM provider enabled | Enables or disables EKM provider |
backup compression default | Enables compression of backups by default |
filestream access level | Sets the FILESTREAM access level |
optimize for ad hoc workloads | When this option is set, plan cache size is further reduced for a single-use ad hoc OLTP workload. |
access check cache bucket count | Defaults hash bucket count for the access check result security cache |
access check cache quota | Defaults quota for the access check result security cache |
backup checksum default | Enables checksum of backups by default |
automatic soft-NUMA disabled | Automatic soft-NUMA is enabled by default |
external scripts enabled | Allows execution of external scripts |
Agent XPs | Enables or disables Agent XPs |
Database Mail XPs | Enables or disables Database Mail XPs |
SMO and DMO XPs | Enables or disables SMO and DMO XPs |
Ole Automation Procedures | Enables or disables Ole Automation Procedures |
xp_cmdshell | Enables or disables command shell |
Ad Hoc Distributed Queries | Enables or disables Ad Hoc Distributed Queries |
Replication XPs | Enables or disables Replication XPs |
contained database authentication | Enables contained databases and contained authentication |
hadoop connectivity | Configures SQL Server to connect to external Hadoop or Microsoft Azure storage blob data sources through PolyBase |
polybase network encryption | Configures SQL Server to encrypt control and data channels when using PolyBase |
remote data archive | Allows the use of the REMOTE_DATA_ARCHIVE data access for databases |
polybase mode | Configures the mode this node should operate in when using the PolyBase feature |
The sys.configurations catalog view returns the columns detailed in Table 4-14.
Table 4-14. Columns Returned by sys.configurations
Column | Description |
---|---|
configuration_id | The primary key |
name | The name of the setting |
value | The currently configured value of the setting |
minimum | The minimum value that can be configured for the setting |
maximum | The maximum value that can be configured for the setting |
value_in_use | The value that is currently in use. This can differ from the value column, if the setting has been changed but the instance has not been reconfigured or restarted. |
description | A description of the setting |
is_dynamic | Specifies if the setting is dynamic: • 1 indicates that the setting can be applied using RECONFIGURE • 0 indicates that the instance must be restarted |
is_advanced | Specifies if the setting can only be changed when show advanced options is set to 1: • 0 indicates it is not an advanced option • 1 indicates that it is an advanced option |
The parameters accepted by sp_configure are detailed in Table 4-15.
Table 4-15. Parameters Accepted by sp_configure
Parameter | Description |
---|---|
@configname | The name of the setting to change |
@configvalue | The value to be configured for the setting |
Analyzing Performance Counters
In many environments, DBAs are not allowed access to the Operating System and become heavily reliant on the Windows Administrators to use tools such as Perfmon. What many DBAs are unaware of is that all Perfmon counters are actually exposed at the SQL Server level, which allows you to work around this issue and reduce the time to resolution for many performance issues experienced by the business.
These performance counters are exposed through the sys.dm_os_performance_counters DMV. The columns returned by this DMV are detailed in Table 4-16.
Table 4-16. Columns Returned by sys.dm_os_performance_counters
Column | Description |
---|---|
object_name | The category of the counter |
counter_name | The name of the counter |
instance_name | The instance of the counter. For example, database-related counters have an instance for each database. |
cntr_value | The value of the counter |
cntr_type | The type of counter. Counter types are described in Table 4-17. |
The cntr_type column informs you of the type of counter. There are six types of counter exposed by the DMV, and each type is detailed in Table 4-17.
Table 4-17. Counter Type
Counter Type | Description |
---|---|
1073939712 | Use PERF_LARGE_RAW_BASE as a base value in conjunction with the PERF_LARGE_RAW_FRACTION type to calculate a counter percentage or with PERF_AVERAGE_BULK to calculate an average. |
537003264 | Use PERF_LARGE_RAW_FRACTION as a fractional value in conjunction with PERF_LARGE_RAW_BASE to calculate a counter percentage. |
1073874176 | PERF_AVERAGE_BULK is a cumulative average that you use in conjunction with PERF_LARGE_RAW_BASE to calculate a counter average. The counter, along with the base, is sampled twice, to calculate the metric over a period of time. |
272696320 | PERF_COUNTER_COUNTER is a 32-bit cumulative rate counter. The value should be sampled twice, to calculate the metric over a period of time. |
272696576 | PERF_COUNTER_BULK_COUNT is a 64-bit cumulative rate counter. The value should be sampled twice, to calculate the metric over a period of time. |
65792 | PERF_COUNTER_LARGE_RAWCOUNT returns the last sampled result for the counter. |
The query in Listing 4-8 demonstrates how to capture metrics of the type PERF_COUNTER_LARGE_RAWCOUNT, which is the simplest form of counter. The query returns the number of memory grants that are currently pending.
Listing 4-8. Using Counter Type 65792
SELECT *
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Memory Grants Pending' ;
The script in Listing 4-9 demonstrates capturing metrics of type PERF_COUNTER_BULK_COUNT and PERF_COUNTER_COUNTER. The script records the rate of lock requests by capturing the value twice, with a one-minute interval. It then subtracts the first value from the second and divides by 60, to obtain the rate per second.
Listing 4-9. Using Counter Types 272696576 and 272696320
DECLARE @cntr_value1 BIGINT = (
SELECT cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Lock Requests/sec'
AND instance_name = '_Total') ;
WAITFOR DELAY '00:01:00'
DECLARE @cntr_value2 BIGINT = (
SELECT cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Lock Requests/sec'
AND instance_name = '_Total') ;
SELECT (@cntr_value2 - @cntr_value1) / 60 'Lock Requests/sec' ;
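Stripped of the T-SQL plumbing, the arithmetic in Listing 4-9 is simply two samples of a cumulative counter divided by the sampling interval. The sketch below expresses this generically in Python; the function name and sample values are invented for illustration:

```python
def per_second_rate(first_sample, second_sample, interval_seconds):
    """Convert two samples of a cumulative counter into a per-second rate."""
    if interval_seconds <= 0:
        raise ValueError("interval_seconds must be positive")
    return (second_sample - first_sample) / interval_seconds

# e.g. 120,000 lock requests accumulated over a 60-second window
rate = per_second_rate(1_000_000, 1_120_000, 60)  # 2000.0 requests/sec
```

Note that the T-SQL version performs integer (BIGINT) division, so it truncates any fractional part of the rate.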
The script in Listing 4-10 demonstrates how to capture a counter of type 537003264. The plan cache hit ratio counter is of this type. Because it is a fractional value, in order to extract a meaningful percentage, we must multiply the value by 100 and divide it by its base counter.
Tip
If you are running the script against a named instance, you will have to replace SQLServer: with MSSQL$InstanceName:.
Listing 4-10. Using Counter Type 537003264
SELECT
100 *
(
SELECT cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name = 'SQLServer:Plan Cache'
AND counter_name = 'Cache hit ratio'
AND instance_name = '_Total')
/
(
SELECT cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name = 'SQLServer:Plan Cache'
AND counter_name = 'Cache hit ratio base'
AND instance_name = '_Total') [Plan cache hit ratio %] ;
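The same fraction-over-base calculation can be expressed with a guard for an empty base counter, which would otherwise cause a divide-by-zero. This is an illustrative Python sketch; the function name is invented here, and the values passed in stand for the cntr_value of the fraction and base counters:

```python
def counter_percentage(fraction_value, base_value):
    """Compute a percentage from a PERF_LARGE_RAW_FRACTION counter and its base.

    Returns None when the base is 0, i.e. no events have been recorded yet.
    """
    if base_value == 0:
        return None
    return 100.0 * fraction_value / base_value
```

Note also that the T-SQL in Listing 4-10 performs integer division; multiplying by 100.0 instead of 100 would preserve the fractional part of the percentage.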
The PERF_AVERAGE_BULK counter type is the most complex to use. We must capture the value and its corresponding base counter twice. We then deduct the first capture of the counter from the second, deduct the first capture of the base counter from the second, and divide the counter delta by the base delta, to calculate the average over the time period.
The script in Listing 4-11 demonstrates how to achieve this for the Average Latch Wait Time (ms) counter. Because it is possible that no latches will be requested within the time period, we have wrapped the SELECT statement in an IF…ELSE block, so that a divide-by-zero error is avoided when the base counter does not change between the two samples.
Listing 4-11. Using Counter Type 1073874176
DECLARE @cntr TABLE
(
ID INT IDENTITY,
counter_name NVARCHAR(256),
counter_value BIGINT,
[Time] DATETIME
) ;
INSERT INTO @cntr
SELECT
counter_name
,cntr_value
,GETDATE()
FROM sys.dm_os_performance_counters
WHERE counter_name IN('Average Latch Wait Time (ms)',
'Average Latch Wait Time base') ;
WAITFOR DELAY '00:01:00' ;
INSERT INTO @cntr
SELECT
counter_name
,cntr_value
,GETDATE()
FROM sys.dm_os_performance_counters
WHERE counter_name IN('Average Latch Wait Time (ms)',
'Average Latch Wait Time base') ;
IF (SELECT COUNT(DISTINCT counter_value)
FROM @cntr
WHERE counter_name = 'Average Latch Wait Time base') = 2
BEGIN
SELECT
(
(
SELECT TOP 1 counter_value
FROM @cntr
WHERE counter_name = 'Average Latch Wait Time (ms)'
ORDER BY [Time] DESC
)
-
(
SELECT TOP 1 counter_value
FROM @cntr
WHERE counter_name = 'Average Latch Wait Time (ms)'
ORDER BY [Time] ASC
)
)
/
(
(
SELECT TOP 1 counter_value
FROM @cntr
WHERE counter_name = 'Average Latch Wait Time base'
ORDER BY [Time] DESC
)
-
(
SELECT TOP 1 counter_value
FROM @cntr
WHERE counter_name = 'Average Latch Wait Time base'
ORDER BY [Time] ASC
)
) [Average Latch Wait Time (ms)] ;
END
ELSE
BEGIN
SELECT 0 [Average Latch Wait Time (ms)] ;
END
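Stripped of the table-variable plumbing, the PERF_AVERAGE_BULK calculation in Listing 4-11 is a delta-over-delta. The Python sketch below is illustrative only; the function name and sample values are invented here:

```python
def average_over_interval(value1, base1, value2, base2):
    """Average a PERF_AVERAGE_BULK counter over a sampling interval.

    value1/value2 are the two cumulative counter samples, and base1/base2
    are the matching base-counter samples. Returns 0 when no events
    occurred during the interval (a base delta of zero), mirroring the
    divide-by-zero guard in the T-SQL.
    """
    base_delta = base2 - base1
    if base_delta == 0:
        return 0
    return (value2 - value1) / base_delta

# e.g. 600ms of additional latch waits across 30 additional latches = 20ms average
avg_wait = average_over_interval(1000, 10, 1600, 40)  # 20.0
```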
Being able to retrieve these performance counters allows a DBA to quickly tackle an issue without involving the Windows engineers. In environments in which an enterprise monitoring solution, such as SCOM (System Center Operations Manager), is not available, this methodology can also be used to create a tactical monitoring solution. This will be discussed in depth in Chapter 9.
Summary
Using SQL Server metadata to create intelligent routines can drastically reduce your workload and help improve consistency and performance throughout your enterprise. PowerShell provides the ability to iterate through every instance in your environment, or even every database in every instance. This is implemented through the simple use of nested foreach loops.
SQL Server metadata can also be interrogated to retrieve performance statistics that are pertinent to the instance. This can reduce resolution times of performance-related incidents, as it removes the reliance on the Windows administration team, for configuring Perfmon traces.