Chapter 10. Common Database Maintenance Tasks

Maintaining a SQL Server database is critical to ensuring that your database environment performs reliably and efficiently. Although advances in the SQL Server platform have made maintenance activities simpler and more straightforward, DBAs must still understand the following:

  • Which database maintenance tasks need to be performed and why

  • When to perform them

  • How to perform them

In this chapter, we cover a series of important database maintenance tasks, how to determine whether and when they need to be performed, and ways to perform those tasks. Subsequent chapters build upon this one by showing you how to use different tools, such as PowerShell, and by showing you how to define maintenance plans to help automate your work.

Backing Up and Restoring

If you talked to 100 DBAs, it's a pretty safe bet that all of them would list backing up and restoring databases as two of the most important maintenance tasks they perform. Backup and restore are so important that we've already covered each topic separately in its own chapter. Chapter 8 shows you how to take backups, and Chapter 9 shows you how to restore your database from those backups.

At the risk of repeating ourselves, we want to stress that, as a DBA, you must devise a backup and restore strategy that will guarantee the integrity of the data in the databases that you manage and that will minimize the amount of downtime that you experience in the case of a catastrophic failure. Such a strategy is referred to as a disaster recovery (DR) plan. You simply must have such a plan. If you don't have one, make creating it a priority.

Note

For help in creating a disaster recovery plan, we recommend James Luetkehoelter's excellent book Pro SQL Server Disaster Recovery, which is also published by Apress. James takes a holistic approach to disaster recovery that encompasses not only the technical side of things but also the human and business sides.

Checking the Database Integrity

Databases should be checked at regular intervals for consistency. We recommend checking the database consistency prior to a database backup. Later, in Chapter 12, when we discuss maintenance plans, you will see how you can set the order of tasks, which allows you to perform the database consistency check first. Consistency issues are typically the result of some form of hardware failure, but they can also occur when the database server is powered off unexpectedly.

You can run four Transact-SQL commands to check various levels of consistency of your database, as described in Table 10-1.

Table 10-1. Commands for Checking Database Integrity

T-SQL Command       What It Does
-----------------   -----------------------------------------------------------
DBCC CHECKALLOC     Checks the consistency of disk space allocation structures
                    for a database
DBCC CHECKCATALOG   Checks the consistency between the system metadata tables
                    for a specified database
DBCC CHECKTABLE     Checks all pages and structures in a table or indexed view
                    for consistency
DBCC CHECKDB        Runs DBCC CHECKALLOC, DBCC CHECKCATALOG, and DBCC CHECKTABLE,
                    and additionally validates indexed view contents, any
                    Service Broker data stored in the database, and (when using
                    FILESTREAM) the consistency between table metadata and file
                    system directories

You should always run the DBCC CHECKDB command to check database integrity prior to backing up, unless checking the entire database is too time-consuming. When it is, you can take one of the following approaches:

  • Run the three independent DBCC commands that are part of CHECKDB separately to accommodate your schedule.

  • Use the PHYSICAL_ONLY option for CHECKDB. The PHYSICAL_ONLY option can greatly reduce run time, since it checks only the physical structures in the database for consistency.

In either case, you should run CHECKDB with no options periodically for your databases to provide the most comprehensive check of your database consistency. Figure 10-1 provides an example.

Figure 10-1. Running the DBCC CHECKDB command
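In script form, the full check and the faster physical-only variant look like this (a minimal sketch, run against the AdventureWorks sample database):

USE AdventureWorks;
GO
-- Comprehensive consistency check of the current database
DBCC CHECKDB;
GO
-- Faster alternative for when the full check takes too long
DBCC CHECKDB (N'AdventureWorks') WITH PHYSICAL_ONLY;
GO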

The results of the DBCC CHECKDB command (or any of the other DBCC commands listed here) are written to the SQL Server error log. If the integrity of the database is sound, you will see a log message like the one in Figure 10-2, indicating that no errors were detected.

Figure 10-2. Log results of the DBCC CHECKDB command

If you do find corruption in your database, you want to take corrective action as soon as possible. First you need to determine whether the database can be repaired. You can do this by using one of the following options with the DBCC CHECKALLOC, DBCC CHECKDB, or DBCC CHECKTABLE commands:

  • REPAIR_REBUILD: This does minor repairs and also rebuilds indexes without risk of data loss (this will not repair errors with FILESTREAM data).

  • REPAIR_ALLOW_DATA_LOSS: This repairs corruption even when the result is some data loss. Be careful with this option! We do not recommend using it on a production database except as a last resort; restore from a backup rather than choosing to lose data.

  • REPAIR_FAST: This is supported only for backward compatibility; it performs no repair.

These repair options require the database to be in single-user mode, which means you will need downtime. If data loss cannot be tolerated and REPAIR_REBUILD does not correct the issue, then you will have to restore the database from a backup.
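As an illustration, here is a minimal sketch of a REPAIR_REBUILD attempt against AdventureWorks; it assumes you can kick other connections off the database while the repair runs:

USE master;
GO
-- Repair options require single-user mode; roll back other connections immediately
ALTER DATABASE AdventureWorks SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
-- Attempt minor repairs and index rebuilds with no risk of data loss
DBCC CHECKDB (N'AdventureWorks', REPAIR_REBUILD);
GO
-- Return the database to normal multiuser operation
ALTER DATABASE AdventureWorks SET MULTI_USER;
GO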

Shrinking the Database

Sometimes a database will have a lot of unused space. For example, you may periodically archive data and delete it from your database, creating a significant amount of unused space. The same is true for the database log; you might have situations where the log grows very large during a data load or with long-running transactions that prevent log truncation. If the size of your database grows out of control, it can cause serious system resource issues. There are two best practices you should follow:

  • Never use the AUTO_SHRINK option on a production database. This option is set to OFF by default in SQL Server 2008.

  • Shrink a production database only when required, that is, when you need to reclaim a large amount of free space in the database. The reason for this caution is that the shrink operation causes heavy index fragmentation. See the next section for how to address index fragmentation.

You can use two DBCC commands to shrink a database, as indicated in Table 10-2. DBCC SHRINKFILE offers finer control, but it also requires more detailed knowledge and understanding of the shrinking process. We recommend using DBCC SHRINKDATABASE unless you are looking to shrink a specific data or log file.

Table 10-2. Commands to Shrink a Database

T-SQL Command        What It Does
------------------   ---------------------------------------------------------
DBCC SHRINKDATABASE  Shrinks all data and log files in a database
DBCC SHRINKFILE      Shrinks an individual data or log file
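For example, the following sketch shrinks AdventureWorks; the target of 10 percent free space left in the files is an arbitrary value for illustration:

USE AdventureWorks;
GO
-- Shrink the database, leaving 10 percent free space in each file
DBCC SHRINKDATABASE (AdventureWorks, 10);
GO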

You can use the system stored procedure sp_spaceused to determine how much free space exists in your database, as follows:

-- System stored procedure
USE AdventureWorks;
EXEC dbo.sp_spaceused;

This stored procedure will return the total database size (data and log), the unallocated space, the reserved space, the data size, the index size, and the amount of unused but reserved space. See Figure 10-3 for an example of running this command in Management Studio.

Figure 10-3. Running the sp_spaceused stored procedure in Management Studio

The most common area in which you will see uncontrolled growth is your log file. The following DBCC command will tell you, for all the databases on your server, the size of the log file and the percentage of the log that is used. See Figure 10-4 for a sample execution of this command.

DBCC SQLPERF (LOGSPACE)
Figure 10-4. Running the DBCC SQLPERF (LOGSPACE) command

If you are managing your transaction log, the numbers this command returns should stay within your defined tolerances. For example, you may want to ensure that your log file is never more than 70 percent used and never less than 30 percent used. If usage rises above the upper threshold, you would grow your transaction log; if it falls below the lower one, you would shrink it, as in the sketch that follows.
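Here is a minimal sketch of shrinking an oversized log with DBCC SHRINKFILE; the logical file name AdventureWorks_Log and the 512 MB target are assumptions, so check sys.database_files for your database's actual log file name:

USE AdventureWorks;
GO
-- Look up the logical names and sizes of the database's files
SELECT name, type_desc, size FROM sys.database_files;
GO
-- Shrink the log file to a hypothetical target of 512 MB
DBCC SHRINKFILE (AdventureWorks_Log, 512);
GO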

Reorganizing and Rebuilding Indexes

Indexes are critical to the execution performance of queries in your database. An index consumes disk space and must be updated whenever write operations change the indexed columns in the underlying table.

When an index is first created, there is no fragmentation associated with the index. As write and update operations that impact the index are performed on the underlying table, the index becomes fragmented, which can cause SQL Server to perform additional reads across scattered pages in the database. This can significantly impact your query performance. There are two reasons for fragmentation:

  • Logical ordering not matching physical ordering: This type of fragmentation occurs when the logical order of an index's leaf pages no longer matches their physical order in the data file, forcing SQL Server to do extra work when it scans the index in order. This type of fragmentation is not a big deal for small queries or for queries that do not require ordered scans.

  • Scattered index information: This can occur for a few reasons. One is too much free space on index pages. (You can control that free space with the fill factor setting when creating an index.) Another is that, as operations occur against the database, page splits on full index pages leave index information scattered across the database. Whatever the cause, scattered index information causes query performance to suffer and makes your indexes larger than necessary.

The "fix" for performance degradation because of a fragmented index is simple. You essentially "start over" by rebuilding or reorganizing the index. Rebuilding an index is essentially dropping and re-creating an index on a table. Reorganizing an index is realigning the index to remove fragmentation. Indexes should be rebuilt when index fragmentation is greater than 30 percent. Indexes should be reorganized when index fragmentation is between 5 percent to 30 percent. (These numbers and thresholds come from Books Online.)

Index rebuilding uses more CPU than reorganizing, and it locks database resources. SQL Server 2008 Enterprise Edition gives you the ONLINE option, which you can enable when rebuilding an index; it keeps the index available during the rebuild. An index reorganize operation keeps the index available by default.
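For example, assuming you are running Enterprise Edition (and that no index on the table contains LOB columns, a SQL Server 2008 restriction on online rebuilds), an online rebuild looks like this:

USE AdventureWorks;
GO
-- Rebuild all indexes on the table while keeping them available for queries
ALTER INDEX ALL ON Production.Product REBUILD WITH (ONLINE = ON);
GO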

Detecting Undue Fragmentation

So, how do you determine which indexes are fragmented? SQL Server 2008 provides a dynamic management function, sys.dm_db_index_physical_stats, that shows you the fragmentation. The following query gives you a listing of index fragmentation for all indexes in a database, most fragmented first. See Figure 10-5 for an example execution.

USE AdventureWorks;
GO
-- List fragmentation for every index in the database, most fragmented first
SELECT a.index_id, b.name AS index_name, a.avg_fragmentation_in_percent,
       c.name AS table_name
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) AS a
JOIN sys.indexes AS b
    ON a.object_id = b.object_id AND a.index_id = b.index_id
JOIN sys.tables AS c
    ON a.object_id = c.object_id
WHERE a.avg_fragmentation_in_percent <> 0
ORDER BY a.avg_fragmentation_in_percent DESC;
GO
Figure 10-5. Index fragmentation query and results

Now we have a list of indexes that need to be rebuilt or reorganized. With this list and the thresholds discussed earlier for when to reorganize versus rebuild, you can address the index fragmentation issues in your database.

Rebuilding an Index

You can rebuild an index in Transact-SQL in two ways. The first is to issue the ALTER INDEX command with the REBUILD option, which causes SQL Server 2008 to drop the existing index and re-create it with the same name.

For example, the following command rebuilds all indexes on the Production.Product table in the AdventureWorks database:

USE AdventureWorks;
GO
ALTER INDEX ALL ON Production.Product REBUILD
GO

Your second option is to issue a CREATE INDEX command with the DROP_EXISTING option, which causes SQL Server to drop the existing index and re-create it. The effect is the same as issuing an ALTER INDEX...REBUILD command.
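For example, the following sketch re-creates the AK_Product_Name index over itself; the index name and key column match the AdventureWorks sample database, so substitute your own:

USE AdventureWorks;
GO
-- Drop and re-create the existing unique index in one operation
CREATE UNIQUE NONCLUSTERED INDEX AK_Product_Name
    ON Production.Product (Name)
    WITH (DROP_EXISTING = ON);
GO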

Note

If you are working with older scripts, especially those created to manage databases that have been migrated forward from earlier versions of SQL Server, you might encounter the DBCC DBREINDEX command. That command is deprecated and is supported only for legacy purposes. It does the same thing as ALTER INDEX...REBUILD.

Reorganizing an Index

The process of reorganizing an index reduces fragmentation by physically reordering the leaf-level pages of the index without dropping the index. An index reorganization uses fewer CPU resources than a rebuild, and the index remains available for query and data operations while the reorganization runs. For example, the following command reorganizes the PXML_ProductModel_CatalogDescription index on the Production.ProductModel table:

USE AdventureWorks;
GO
ALTER INDEX PXML_ProductModel_CatalogDescription ON Production.ProductModel REORGANIZE
GO

Getting Updated Statistics

The SQL Server 2008 query optimizer uses statistics, in the form of the distribution of values in an index or column, to create efficient query plans. During periods of heavy activity on a table, the statistics may no longer reflect the true data distribution in a given index or column. One feature that helps in that situation is SQL Server 2008's ability to automatically create and update statistics.

Note

If the automatic creation and updating of statistics have been turned off in SQL Server 2008, turn them back on database-wide! Automatic updating is an important feature and can save you a lot of headache with respect to maintenance and query tuning.
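Turning both features back on is a pair of ALTER DATABASE statements, shown here against AdventureWorks:

-- Re-enable automatic creation and updating of statistics for the database
ALTER DATABASE AdventureWorks SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS ON;
GO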

When the query optimizer requires statistics for a column in a table that has undergone substantial update activity since the statistics were last created or updated, and you have the automatic creation and updating features turned on, SQL Server automatically updates those statistics by sampling the column. That automation saves you the work of maintaining those statistics manually.

Checking the Status of Automatic Statistics Gathering

You can use the following query to see the status of AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS for your user databases. Normally the system databases are assigned database IDs 1 through 4, so the WHERE clause filters out databases with those IDs:

SELECT name AS "Name",
    CASE is_auto_create_stats_on WHEN 1 THEN 'on' ELSE 'off' END
        AS "AUTO_CREATE_STATISTICS",
    CASE is_auto_update_stats_on WHEN 1 THEN 'on' ELSE 'off' END
        AS "AUTO_UPDATE_STATISTICS"
FROM sys.databases
WHERE database_id > 4;

The query yields results similar to the following:

Name                 AUTO_CREATE_STATISTICS AUTO_UPDATE_STATISTICS
-------------------- ---------------------- ----------------------
ReportServer         on                     on
ReportServerTempDB   on                     on
AdventureWorks       on                     on
mdw                  on                     on
AdventureWorksDW     on                     on
AdventureWorksDW2008 on                     on
AdventureWorksLT     on                     on
AdventureWorksLT2008 on                     on

You can easily see from these results which databases have automatic creation and updating of statistics enabled and which do not.

Manually Updating Statistics

You can augment the automatic updating of statistics by manually updating statistics. Do that with the Transact-SQL statement UPDATE STATISTICS or with the stored procedure sp_updatestats. We recommend not updating statistics too frequently because there is a performance trade-off between improving query plans and the time it takes to recompile queries. The specific trade-offs depend on your application.

There are a few cases when you should update statistics in your database manually:

  • You should run update statistics manually as part of the database maintenance process when database operations are performed that change the distribution of data, such as after performing bulk inserts or table truncation. Manually updating statistics in these cases will prevent query delays because of automatic statistics updates induced by subsequent query operations.

  • You should run it when small numbers of records are added to a table and those records involve ascending or descending key columns, such as IDENTITY or real-time timestamp columns. Otherwise, the lack of cardinality estimates for those new records can result in queries that perform slowly.

To update statistics for an entire database, you can run the sp_updatestats stored procedure:

USE AdventureWorks;
GO
EXEC sp_updatestats

That yields results like the following (abbreviated to a single table):

Updating [Sales].[Store]
    [PK_Store_CustomerID], update is not necessary...
    [AK_Store_rowguid], update is not necessary...
    [IX_Store_SalesPersonID], update is not necessary...
    0 index(es)/statistic(s) have been updated, 3 did not require update.
...

And for a finer degree of control, use the UPDATE STATISTICS Transact-SQL command, which allows you to update statistics for an index or named set of columns on a table or indexed view. For example:

USE AdventureWorks;
GO
UPDATE STATISTICS Sales.SalesOrderDetail AK_SalesOrderDetail_rowguid;
GO

Monitoring Logs

SQL Server activity is logged to the SQL Server error log and to the Windows log files. It is your responsibility to monitor these logs to confirm that the system is behaving normally and that there are no errors or warnings indicating that a problem has occurred or is about to occur.

SQL Server logs system events and user-defined events to the SQL Server error log and the Microsoft Windows Application log (if configured). Both logs automatically timestamp all recorded events. Use the information in these logs to troubleshoot problems related to SQL Server.

The Windows Application log provides an overall view of events that occur on the Windows operating system, as well as events from SQL Server 2008. Use the Windows Event Viewer to view the Application log and to filter events on the criteria you are interested in, such as critical, error, and warning events from the SQLSERVER and SQLSERVERAGENT event sources. Figure 10-6 shows how to filter the event log.

Figure 10-6. Filtering the Windows event log

The result of applying the filter is a refined set of events, as in Figure 10-7.

Figure 10-7. Filtered Windows event log

The SQL Server error log contains log events you use to ensure that processes have completed successfully (for example, backup and restore operations, batch commands, or other scripts and processes). You can view the SQL Server error log by using SQL Server Management Studio or any text editor.
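You can also read the error log from T-SQL by using the xp_readerrorlog extended stored procedure. It is undocumented (though long-standing), so treat the following as a convenience sketch rather than a supported API:

-- Search the current error log (log 0, type 1 = SQL Server log) for backup messages
EXEC master.dbo.xp_readerrorlog 0, 1, N'Backup';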

SQL Server Management Studio allows you to look at SQL Server log events alone or at an integrated set of SQL Server and Windows log events. This is a great feature that can help you see the detailed sequence of events that have occurred as you investigate a potential issue. To view the SQL Server error log from Management Studio, follow these steps:

  1. In Object Explorer, expand your server, and expand the Management folder.

  2. Right-click SQL Server Logs, click View, and then select SQL Server Log or SQL Server and Windows Log.

In Figure 10-8, we have selected the SQL Server and Windows Log to show you the integrated output. The results in Figure 10-8 can be filtered in a fashion similar to the Windows event log, although the filtering capability is not as robust.

Figure 10-8. SQL Server Log File Viewer, displaying integrated Windows and SQL Server events

Creating SQL Server Agent Jobs

Up to this point, we have talked about the maintenance tasks that you need to consider as part of the effective operation of your SQL Server database environment. One thing you will realize very quickly is that database maintenance is a lot of work. Frankly, some of it is tedious, and we're sure we're not offending anyone if we say that performing database maintenance tasks is not the most exciting thing you will do as a DBA. In our experience, the more maintenance we can relegate to automation, the better. The only thing you really need to know is when something goes wrong, such as when your backup fails because you ran out of disk space on the backup volume!

SQL Server Agent is just the tool to assist with automating your database maintenance tasks. SQL Server Agent is a Windows service that runs continuously. Through defining and scheduling actions called jobs, Agent can automatically perform work against your SQL Server; for example, it can handle scheduled backups. SQL Server Agent can also be used for a variety of other tasks, including alerting you when performance events, such as deadlocks, occur.

The best way to explain how to use SQL Server Agent is by example, so we will create a backup job that runs every day at midnight. We will use SQL Server Management Studio (SSMS) to create the job.

Connecting to SQL Server

Once you connect to the SQL Server database engine, you'll notice the SQL Server Agent node in Object Explorer in one of three states. First, if you don't see the SQL Server Agent node at all, you probably are not a member of the sysadmin role and haven't been granted any specific access to SQL Server Agent. The second state is stopped: if you perform a default installation of SQL Server and don't explicitly tell setup to start SQL Server Agent, you'll see the node shown in Figure 10-9.

Figure 10-9. SQL Server Agent status in Management Studio: stopped

It's important to briefly discuss what is meant by "Agent XPs disabled." In SQL Server, a large effort was made to restrict the so-called surface area for security-related attacks. One resulting feature is the ability to disable the execution of extended stored procedures such as xp_cmdshell, xp_sendmail, and, in this case, xp_sqlagent_notify. These extended stored procedures are logically grouped inside SQL Server, and you can list those groups by issuing an sp_configure statement, as shown here:

-- List all configuration options, including advanced ones
EXEC sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
EXEC sp_configure

The result set returns about 60 different global configuration settings. Although most of these settings aren't related to the enabling and disabling of extended stored procedures, if you look through this list, you'll see "Agent XPs" listed. When the value is 1, Agent-specific extended stored procedures, such as xp_sqlagent_notify, as well as other procedures, such as those found in SMO calls, will be enabled.

At this point, you might be wondering whether you have to manually go to the Query Editor and issue a call to enable Agent XPs when you want your SQL Agent to work. Thankfully, this isn't the case. When the SQL Server Agent service is started, it will automatically enable the Agent XPs, and when it's stopped, it will automatically disable the Agent XPs. Note that this is the only time these extended stored procedure groupings will be automatically enabled and disabled. For normal use of SQL Agent, you'll never need to worry about manually changing this setting.
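That said, if you ever do need to flip the setting by hand (for example, on a server where configuration is managed by scripts), it's an ordinary sp_configure call:

-- Manually enable the Agent XPs group (the Agent service normally handles this)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Agent XPs', 1;
RECONFIGURE;
GO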

Returning to the example, in Object Explorer you see the SQL Server Agent node with the words "Agent XPs disabled." As previously mentioned, if you right-click the node and select Start, the Agent service will automatically enable the Agent XPs, and you can then use SQL Server Agent. When SQL Server Agent is started and the Agent XPs group is enabled, Object Explorer shows the Agent node as started, as you see in Figure 10-10.

Figure 10-10. SQL Server Agent status in Management Studio: started

At this point, you're ready to create your SQL Server Agent job.

Creating an Agent Job

Once you're connected to SQL Server and have started the SQL Server Agent service, expand the SQL Server Agent node. Your screen should look like Figure 10-11.

Figure 10-11. SQL Server Agent node in Object Explorer

To create a job, right-click the Jobs node, and select New Job. This will launch the New Job dialog box shown in Figure 10-12.

Figure 10-12. General tab in the New Job dialog box

The General tab allows you to enter some metadata about the job, such as its name, description, and who the owner of the job is.

Note

Only members of the sysadmin role can change the owner of a job.

Referring to the original backup scenario, let's give our job an appropriate name like FullDBBackup. Once you've given the job a name, you can proceed to add steps to the job.

Jobs can have one or more steps, and each step is one of the following types: T-SQL, ActiveX Script, Operating System (CmdExec), Replication (there are actually five Replication subsystems, but for the most part these are configured using Replication wizards and dialog boxes, and users usually don't manually create replication job steps), SQL Server Analysis Services Command and Query, or SQL Server Integration Services (SSIS) package. Since a job's steps don't all have to be the same type, it's possible to have a job that first executes some T-SQL against SQL Server, then runs an SSIS package, and finally processes an Analysis Services cube. In this example, all you need is a single T-SQL job step. When you click the Steps tab, you're presented with a grid listing the steps within your job, as shown in Figure 10-13.

Figure 10-13. Steps tab in the New Job dialog box

The tab in Figure 10-13 allows you to add, remove, and edit job steps as well as define which job step will be executed first. Since there's only one job step in your example, the backup of the database itself, the first step will be the starting step.

Every job step requires a unique name. Once you give this particular step a name, you can add the T-SQL script for backing up the database as the command. Click the New button to add a new step, and you'll be presented with the New Job Step dialog box where you can enter the following information. The remaining fields can use default values:

  • Step Name: Backup Database Step

  • Type: Transact-SQL script (T-SQL)

  • Database: <your database>, "AdventureWorks" in this case

  • Command: BACKUP DATABASE [AdventureWorks] TO DISK=N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Backup\AdventureWorks.bak' WITH NOFORMAT, NOINIT, NAME=N'AdventureWorks-Full Database Backup', SKIP, NOREWIND, NOUNLOAD

The result should look like that in Figure 10-14.

Figure 10-14. General tab in the New Job Step dialog box

Once you've added the job steps, it's time to define the schedule. In this example, you want the full database backup to occur every day at midnight. In the New Job dialog box, you'll find a Schedule tab. Clicking this tab, you see another grid that lists all the schedules that will execute your job. In this case, you will define a new schedule for your database backup job. Click the New button to add a new job schedule, and you'll be presented with the New Job Schedule dialog box where you can enter the following information (the remaining fields can use default values), as shown in Figure 10-15:

  • Name: Full Backup Schedule

  • Occurs: Daily

In previous versions of SQL Server Agent, a single job could have zero or more schedules defined, and no two jobs could share the same schedule (although they could have separate schedules defined with the same time period). In SQL Server 2008, the same schedule can be shared among any jobs that the user owns. Once you've defined the schedule and clicked OK in the New Job Schedule dialog box, your database backup job is ready for automatic execution.

Figure 10-15. New Job Schedule dialog box

The final step is to click OK in the New Job dialog box (where you started) to complete the SQL Server Agent job. When you finish, you'll see your new job in the list of SQL Server Agent jobs (the circled job in Figure 10-16).

Figure 10-16. SQL Server Agent job list including the backup job
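Everything the New Job dialog box just did can also be scripted with the msdb job stored procedures. The following is a minimal sketch: the job name, step, and schedule mirror the example above, and the backup path assumes a default instance layout:

USE msdb;
GO
-- Create the job shell
EXEC dbo.sp_add_job @job_name = N'FullDBBackup';
-- Add the single T-SQL step that performs the backup
EXEC dbo.sp_add_jobstep @job_name = N'FullDBBackup',
    @step_name = N'Backup Database Step',
    @subsystem = N'TSQL',
    @database_name = N'AdventureWorks',
    @command = N'BACKUP DATABASE [AdventureWorks] TO DISK = N''C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Backup\AdventureWorks.bak'' WITH NOFORMAT, NOINIT, NAME = N''AdventureWorks-Full Database Backup'', SKIP, NOREWIND, NOUNLOAD';
-- Define a daily schedule that fires at midnight (HHMMSS)
EXEC dbo.sp_add_schedule @schedule_name = N'Full Backup Schedule',
    @freq_type = 4,              -- daily
    @freq_interval = 1,          -- every day
    @active_start_time = 000000; -- midnight
-- Attach the schedule to the job, and target the local server
EXEC dbo.sp_attach_schedule @job_name = N'FullDBBackup',
    @schedule_name = N'Full Backup Schedule';
EXEC dbo.sp_add_jobserver @job_name = N'FullDBBackup';
GO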

Defining Alerts

Since monitoring logs can be a very manual process from a maintenance perspective, SQL Server Agent provides a mechanism to detect SQL Server events written to the Application log. In addition to monitoring SQL Server events, SQL Server Agent can also monitor performance conditions and Windows Management Instrumentation (WMI) events.

To define an alert using SQL Server Agent, you need the following:

  • The name of the alert

  • The event or performance condition that triggers the alert

  • The action that SQL Server Agent takes in response to the event or performance condition

Alerts are configured to detect the following event types:

  • SQL Server events

  • SQL Server performance conditions

  • WMI events

So, alerts are a very powerful feature and can help you manage a large number of databases. The key is to identify the set of events that will help you more effectively manage your database environment. As a rule of thumb, you should start with a small set of alerts and then add alerts as required. That way, you don't overburden yourself with too many alerts that aren't meaningful. These are the alerts we recommend starting with:

  • Alerts on each of the level 16 through 25 severity errors (we'll show a script in a few moments that you can use to add each of these alerts via Transact-SQL).

  • Alerts for the log file in msdb and tempdb. You can add your primary database log files here as well.

  • Alerts for the physical drive or drives holding the log to help you avoid running out of space on those drives.

The following are the steps to create a SQL Server Agent alert for an msdb "log full" event:

  1. In Object Explorer, expand your server, and expand SQL Server Agent.

  2. Right-click Alerts, and select New Alert.

  3. In the Name box, enter msdb log full alert.

  4. Select the Enable check box to enable the alert to run. By default, Enable is selected.

  5. In the Type box, click "SQL Server event alert."

  6. In the "Database name" list, select "msdb."

  7. Click the "Error number" radio button, and then type in 9002. For additional errors you may want to track based on monitoring the Windows log, you can get the error number from the Event Viewer.

    Figure 10-17 shows the completed General page.

    Figure 10-17. Creating a SQL Server alert: General page

  8. If you have defined a SQL Server operator, then click the Response page, and select Notify Operators. In the example shown in Figure 10-18, the operator will be notified by email when the alert condition is met. If you have not created an operator, you can create one by clicking the New Operator button at the bottom of the screen. Then enter a name such as Carmen_the_operator and an email address.

    Figure 10-18. Creating a SQL Server alert: Response page

  9. Finally, select any additional options like "Include error text in email." See Figure 10-19 for an example. Then click OK to complete the definition of the alert.

Figure 10-19. Creating a SQL Server alert: Options page

Since an alert is enabled by default, your newly created alert will be active and will notify the operator when the msdb log is full. You can use alerts to detect resource thresholds, such as disk or CPU utilizations beyond a specific tolerance level. Alerts are an important part of a well-designed database maintenance plan.

Since the recommendation was to create a set of alerts as a starting point, using the Management Studio interface to create each one of those might be laborious. A simpler way to create a set of alerts is to write a Transact-SQL script to perform the operations. The following is an example of creating the "Severity 16 error alert" with notification to an operator. You can copy the script for each of the severities 17 through 25, making sure that you change the @severity, @name, and @alert_name parameters for each severity level, or loop over the severities as sketched after the script.

USE [msdb]
GO
/****** Create the alert ******/
EXEC msdb.dbo.sp_add_alert @name=N'Severity 16 Error Alerts',
        @message_id=0,
        @severity=16,
        @enabled=1,
        @delay_between_responses=10,     -- seconds to wait before responding again
        @include_event_description_in=5, -- bitmask: 1 = email plus 4 = net send
        @category_name=N'[Uncategorized]',
        @job_id=N'00000000-0000-0000-0000-000000000000'
GO
/****** Add the notification method for the alert ******/
EXEC msdb.dbo.sp_add_notification
        @alert_name=N'Severity 16 Error Alerts',
        @operator_name=N'Carmen_the_operator',
        @notification_method = 1         -- 1 = email
GO
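As an alternative to copying the script nine times, the loop below (a sketch, reusing the operator created earlier) adds the severity 17 through 25 alerts in one pass:

USE msdb;
GO
DECLARE @sev int, @alert_name sysname;
SET @sev = 17;
WHILE @sev <= 25
BEGIN
    SET @alert_name = N'Severity ' + CAST(@sev AS nvarchar(2)) + N' Error Alerts';
    -- Create the alert for this severity level
    EXEC dbo.sp_add_alert @name = @alert_name,
        @message_id = 0,
        @severity = @sev,
        @enabled = 1,
        @delay_between_responses = 10,
        @include_event_description_in = 5;
    -- Notify the operator by email
    EXEC dbo.sp_add_notification @alert_name = @alert_name,
        @operator_name = N'Carmen_the_operator',
        @notification_method = 1;
    SET @sev = @sev + 1;
END
GO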

Summary

This chapter has covered the most common maintenance tasks that you will be responsible for as a DBA. We're not done yet, though! Next we'll talk more about automating those tasks using tools and technologies such as Microsoft's PowerShell scripting language. (You'll learn about that in Chapter 11.) Then, in Chapter 12, we'll cover SQL Server maintenance plans, which give you an easy and intuitive way to schedule and perform repetitive and otherwise boring tasks.
