Then, connect to the server as the PapaSmurf login using SSMS, with the default database set to LocalDb, and try to run the procedure. You will notice that the connection isn’t to the contained database as such; rather, it is to the server, with your session placed in the context of the contained database. Using EXECUTE AS will give you the same effect:

EXECUTE AS USER = 'PapaSmurf';
GO
EXECUTE dbo.ExternalDb$TestCrossDatabase;
GO
REVERT;
GO

You will see that it behaves in exactly the same way and returns the result of the query. However, connecting with a contained user is a different challenge. First, create a contained user, and then give it rights to execute the procedure:

CREATE USER Gargamel WITH PASSWORD = 'Nasty1$';
GO
GRANT EXECUTE ON dbo.ExternalDb$TestCrossDatabase TO Gargamel;

Next, change to the security context of the new contained user and try to change database context to ExternalDb:

EXECUTE AS USER = 'Gargamel';
GO
USE ExternalDb;

This will give you the following error (plus or minus some characters in that server principal moniker, naturally):

Msg 916, Level 14, State 1, Line 1
The server principal "S-1-9-3-3326261859-1215110459-3885819776-190383717." is not able to access the database "ExternalDb" under the current security context.

Obviously the “server principal” part of the error message could be confusing, but it is accurate, because in this case the database behaves as a server to that user. Executing the following code will give you the exact same error:

EXECUTE dbo.ExternalDb$TestCrossDatabase;
GO
REVERT;
GO

When turning on containment, note that since the maximum containment level is PARTIAL, some code you have written may not be containment safe. To check, you can use the sys.dm_db_uncontained_entities dynamic management view. To find objects that reference data outside the database, use the following query:

SELECT  OBJECT_NAME(major_id) AS object_name,statement_line_number,
        statement_type, feature_name, feature_type_name
FROM    sys.dm_db_uncontained_entities
WHERE   class_desc = 'OBJECT_OR_COLUMN';

For our database, it will return the following, which corresponds to the procedure and the query that used a cross-database reference:

object_name                   statement_line_number statement_type  
----------------------------- --------------------- ---------------
ExternalDb$TestCrossDatabase  3                     SELECT          
feature_name                       feature_type_name
-------------------------------------------------------
Server or Database Qualified Name  T-SQL Syntax

This DMV will also return uncontained users:

SELECT  USER_NAME(major_id) AS USER_NAME
FROM    sys.dm_db_uncontained_entities
WHERE   class_desc = 'DATABASE_PRINCIPAL'
  AND   USER_NAME(major_id) <> 'dbo';

And you created one already in this chapter:

  USER_NAME            
  ---------------------
  PapaSmurf                

■ Note  One additional, very interesting (albeit non-security-related) feature of contained databases is that the collation of tempdb, as seen by a contained user, will be that of the contained database. While this is not frequently an issue for most databases, it makes life easier when moving databases to servers with different collations. I won’t cover that feature in any other location in this book.

Finally, to do a bit of housekeeping and remove containment from the database, delete the contained user you created and turn off containment (you have to drop the user or, as previously mentioned, you would receive an error stating that uncontained databases cannot have contained users):

DROP USER Gargamel;
GO
USE master;
GO
ALTER DATABASE LocalDb  SET CONTAINMENT = NONE;
GO
USE LocalDb;
GO

Using Impersonation to Cross Database Lines

Impersonation can be an alternative to using the DB_CHAINING setting. In this case, you no longer need to set chaining to ON; instead, you set the database to TRUSTWORTHY, since you will be executing code that reaches outside the current database:

ALTER DATABASE localDb
   SET DB_CHAINING OFF;
ALTER DATABASE localDb
   SET TRUSTWORTHY ON;
ALTER DATABASE externalDb
   SET DB_CHAINING OFF;

Now, you can rewrite the procedure like this, which lets the person execute in the context of the owner of the schema that the procedure is in:

CREATE PROCEDURE dbo.ExternalDb$testCrossDatabase_Impersonation
WITH EXECUTE AS SELF --as procedure creator, who is the same as the db owner
AS
SELECT Value
FROM   ExternalDb.dbo.Table1;
GO
GRANT EXECUTE ON dbo.ExternalDb$TestCrossDatabase_Impersonation to PapaSmurf;

If the login of the owner of the dbo schema (in this example sa, because I set the owner of both databases to sa) has access to the other database, you can impersonate dbo in this manner. In fact, you can access the external resources seamlessly. This is probably the simplest method of handling cross-database chaining for most corporate needs. Of course, impersonation should be used very carefully, and it should raise a humongous red flag if you’re working on a database server that’s shared among many different companies.

Because setting TRUSTWORTHY to ON requires sysadmin privileges, using impersonation isn’t a tremendous hole, but note that members of the sysadmin role aren’t required to understand the implications when one of their users calls up and asks for TRUSTWORTHY to be turned on.

Now, when you execute the procedure as PapaSmurf user, it works:

EXECUTE AS USER = 'PapaSmurf';
GO
EXECUTE dbo.ExternalDb$TestCrossDatabase_Impersonation;
GO
REVERT;

If you toggle TRUSTWORTHY to OFF and try to execute the procedure

ALTER DATABASE localDb  SET TRUSTWORTHY OFF;
GO
EXECUTE dbo.ExternalDb$TestCrossDatabase_Impersonation;

no matter what user you execute as, you’ll receive the following error:

Msg 916, Level 14, State 1, Procedure externalDb$testCrossDatabase_Impersonation, Line 4
The server principal "sa" is not able to access the database "ExternalDb" under the current security context.

This is clearly another of the confusing sorts of error messages you get on occasion, since the server principal sa ought to be able to do anything, but it is what it is. Next, go back to the containment method. Turn TRUSTWORTHY back on, set the containment to PARTIAL, and re-create the Gargamel user, granting it rights to the impersonation procedure:

ALTER DATABASE LocalDb  SET TRUSTWORTHY ON;
GO
ALTER DATABASE LocalDB  SET CONTAINMENT = PARTIAL;
GO
CREATE USER Gargamel WITH PASSWORD = 'Nasty1$';
GO
GRANT EXECUTE ON dbo.ExternalDb$TestCrossDatabase_Impersonation TO Gargamel;

Now execute the procedure in the context of the contained user:

EXECUTE AS USER = 'Gargamel';
GO
EXECUTE dbo.ExternalDb$TestCrossDatabase_Impersonation;
GO
REVERT;

This time, no error is raised, because the procedure executes in the context of its owner, which is mapped to a server principal that is the same one that owns both the database and the object you are using. Note that this breaks (or really, violates) containment, because you are using external data, but it gives you the rights you need in the (hopefully) rare cases that require cross-database access.

Finally, clean up the users and containment as you have done before:

DROP USER Gargamel;
GO
USE master;
GO
ALTER DATABASE LocalDb  SET CONTAINMENT = NONE;
GO
USE LocalDb;

Using a Certificate-Based Trust

The final thing I’ll demonstrate around cross-database access is using a single certificate installed in both databases to let the code access data across database boundaries. You’ll use it to sign the stored procedure and map a user to this certificate in the target database. This is a straightforward technique and is the best way to do cross-database security chaining when the system isn’t a dedicated corporate resource. It takes a bit of setup, but it isn’t overwhelmingly difficult. What makes using a certificate nice is that you don’t need to open the hole left in the system’s security by setting the database to TRUSTWORTHY. This is because the user who will be executing the procedure is a user in the database, just as if the target login or user were given rights in the externalDB. Because the certificate matches, SQL Server knows that this cross-database access is acceptable.

First, turn off the TRUSTWORTHY setting:

USE LocalDb;
GO
ALTER DATABASE LocalDb
   SET TRUSTWORTHY OFF;

Check the status of your databases as follows:

SELECT name,
       SUSER_SNAME(owner_sid) AS owner,
       is_trustworthy_on, is_db_chaining_on
FROM   sys.databases
WHERE  name IN ('LocalDb','ExternalDb');

This should return the following results (if not, go back and turn off TRUSTWORTHY and chaining for the databases where necessary):

name       owner      is_trustworthy_on is_db_chaining_on
---------- ---------- ----------------- -----------------
externalDb sa         0                 0
localDb    sa         0                 0

Now, create another procedure and give the user PapaSmurf rights to execute it, just like the others (which won’t work now because TRUSTWORTHY is turned off):

CREATE PROCEDURE dbo.ExternalDb$TestCrossDatabase_Certificate
AS
SELECT Value
FROM   ExternalDb.dbo.Table1;
GO
GRANT EXECUTE on dbo.ExternalDb$TestCrossDatabase_Certificate to PapaSmurf;

Then, create a certificate:

CREATE CERTIFICATE ProcedureExecution ENCRYPTION BY PASSWORD = 'jsaflajOIo9jcCMd;SdpSljc'
 WITH SUBJECT =
         'Used to sign procedure:ExternalDb$TestCrossDatabase_Certificate';

Add this certificate as a signature on the procedure:

ADD SIGNATURE TO dbo.ExternalDb$TestCrossDatabase_Certificate
     BY CERTIFICATE ProcedureExecution WITH PASSWORD = 'jsaflajOIo9jcCMd;SdpSljc';

Finally, make an OS file out of the certificate, so a certificate object can be created in the ExternalDb based on the same certificate (choose a directory that works best for you):

BACKUP CERTIFICATE ProcedureExecution TO FILE = 'c:\temp\procedureExecution.cer';

This completes the setup of the localDb. Next, you have to apply the certificate to the externalDb:

USE ExternalDb;
GO
CREATE CERTIFICATE ProcedureExecution FROM FILE = 'c:\temp\procedureExecution.cer';

After that, map the certificate to a user, and give this user rights to the Table1 that the user in the other database is trying to access:

CREATE USER ProcCertificate FOR CERTIFICATE ProcedureExecution;
GO
GRANT SELECT ON dbo.Table1 TO ProcCertificate;

Now, you’re good to go. Change back to the LocalDb and execute the procedure:

USE LocalDb;
GO
EXECUTE AS LOGIN = 'PapaSmurf';
EXECUTE dbo.ExternalDb$TestCrossDatabase_Certificate;

The stored procedure has a signature that identifies it with the certificate, and in the external database, it connects with this certificate to get the rights of the certificate-based user. So, since the certificate user can view data in the table, your procedure can use the data.

The certificate-based approach isn’t as simple as the other possibilities, but it’s certainly far more secure. Pretty much the only major downside is that it does not work with views. However, you now have a safe way of crossing database boundaries that doesn’t require giving the user direct object access and doesn’t open up a hole in your security. Hence, you could use this solution on any server in any situation. Make sure to secure or destroy the certificate file once you’ve used it, so no other user can use it to gain access to your system.

Of the methods shown, this would be the least desirable to use with containment, because you now have even more baggage to set up after moving the database, so we will simply leave it as “it could be done, but shouldn’t be.” Indeed, this method does not work under containment unless you also use impersonation, since the contained user will not have any way to access the other database. (Code is included in the download.)

Finally, clean up the databases used for the examples and move back to the ClassicSecurityExample database you have used throughout this chapter:

REVERT;
GO
USE MASTER;
GO
DROP DATABASE externalDb;
DROP DATABASE localDb;
GO
USE ClassicSecurityExample;

Different Server (Distributed Queries)

I want to make brief mention of distributed queries and introduce the functions that can be used to establish a relationship between two SQL Server instances, or a SQL Server instance and an OLE DB or ODBC data source. (Note that OLE DB is being deprecated and will not be supported forever. For more details, check msdn.microsoft.com/en-us/library/ms810810.aspx, which outlines Microsoft’s “Data Access Technologies Road Map.”)

You can use either of these two methods:

  • Linked servers: You can build a connection between two servers by registering a “server” name that you then access via a four-part name (<linkedServerName>.<database>.<schema>.<objectName>) or through the OPENQUERY interface. The linked server name is the name you specify using sp_addlinkedserver. This could be another SQL Server instance or anything that can be connected to via OLE DB.
  • Ad hoc connections: Using the OPENROWSET or OPENDATASOURCE interfaces, you can return a table of data from any OLE DB source.
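As a sketch of the linked server flavor (all names here are hypothetical, and the provider and data source would be chosen to match your environment), registering the server and then using both access styles looks like this:

EXECUTE sp_addlinkedserver @server = N'RemoteServer', --hypothetical name
                           @srvproduct = N'',
                           @provider = N'SQLNCLI',
                           @datasrc = N'RemoteServerNetworkName';
GO
--four-part name access
SELECT Value
FROM   RemoteServer.ExternalDb.dbo.Table1;
GO
--OPENQUERY sends the query text to the linked server for execution there
SELECT Value
FROM   OPENQUERY(RemoteServer, 'SELECT Value FROM ExternalDb.dbo.Table1');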

In either case, the security chain will be broken from the executing user when crossing SQL Server instance connections, and certainly when using any data source that isn’t SQL Server–based. You configure the security principal that is used to access the external data as part of the linked server configuration. Using linked servers, you could be in the context of the Windows login you are logged in with, a SQL Server standard login on the target machine, or even a single login that everyone uses to “cross over” to the other server. The best practice is to use the Windows login where possible.

As I mentioned briefly in the previous section, one use for EXECUTE AS could be to deal with the case where you’re working with distributed databases. One user might be delegated to have rights to access the distributed server, and then, you execute the procedure as this user to give access to the linked server objects.
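A hedged sketch of that pattern might look like the following (the linked server, login, and user names are all hypothetical; note also that impersonating a database user scopes the context to the database unless the database is marked TRUSTWORTHY, as covered earlier):

--map a single local login to a remote login that has rights on the target
EXECUTE sp_addlinkedsrvlogin @rmtsrvname = N'RemoteServer',
                             @useself = N'FALSE',
                             @locallogin = N'DistributedQueryUser',
                             @rmtuser = N'RemoteReader',
                             @rmtpassword = N'ARemotePassword1$';
GO
--the procedure executes as the delegated database user (mapped to the
--DistributedQueryUser login), who is allowed to reach the linked server
CREATE PROCEDURE dbo.RemoteDb$FetchValues
WITH EXECUTE AS 'DistributedQueryUser'
AS
SELECT Value
FROM   RemoteServer.ExternalDb.dbo.Table1;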

Using either linked servers or ad hoc connections will break the containment model; linked servers, for instance, are defined in the master database, not in the user database.

Obfuscating Data

It isn’t always possible, nor even desirable, to keep users from accessing data. We database administrator types all too often have unfettered access to entire production systems with far too much personal data. Even with well-designed security granularity, a few users will still need rights to run as a member of the sysadmin server role, giving them access to all data.

On the more administrative side of things, if DBAs have access to unencrypted backups of a database, they may be able to easily access any data in a database by simply restoring it to a different server where they are administrators. If you’re dealing with sensitive data, you need to be wary of how you deal with this data:

  • Do you back up the database? Where are these backups?
  • Do you send the backups to an offsite safe location? Who takes them there?
  • Who has access to the servers where data is stored? Do you trust the fate of your company in their hands? Could these servers be hacked?

When data is at rest in the database and users have access to the data, it is also important that we obfuscate the data such that a user cannot tell what it is exactly. This is one of the main ways that we can protect data from casual observers, especially those like us DBA types who generally have full control over the database (in other words, way too much power in the database). In this section we will look at two methods of obfuscating data:

  • Encryption: Encrypting the data so no one without the certificates, passwords, etc. can reasonably access the data. (Encryption isn’t perfect, given enough time, computing power, and knowhow to crack it.)
  • Dynamic data masking: A feature new in SQL Server 2016 that allows you to mask data from users’ view in given scenarios.

Neither of these techniques is perfect, but together you can provide a very solid obfuscation layer over your data.

Encrypting Data

The following are several tools in the SQL Server arsenal to provide encryption for your data. I will not cover any of them in great detail, but if you are architecting a database solution, it is fairly essential that you know about these technologies.

  • Transparent Data Encryption: A feature that will encrypt the data and log files during I/O so that you don’t have to change your code, but anytime the data is at rest, it will stay encrypted (including when it is backed up).
  • Encrypted backups: SQL Server provides methods using the BACKUP command to make backups encrypted.
  • Encryption functions: Functions that let you encrypt values using SQL Server function calls. They are very simple to use, and this is the one of these features I will demonstrate. The biggest downfall is that all of the data needed to decrypt the values is stored locally on the SQL Server instance.
  • Always Encrypted: A new feature in SQL Server 2016 that is somewhat external to the database engine. Unlike the encryption functions, the information needed to decrypt the data is not stored in SQL Server.
  • Binary columns: Not technically a feature of encryption, but prior to Always Encrypted, one solution was simply to encrypt the data externally to SQL Server and store the binary value in a column. Old school is still sometimes the best school.

The biggest “key” to any encryption strategy that is intended to obfuscate important data is that you don’t store all the information to decrypt data together with the data. It would be just as silly as taking the time to install deadbolt locks and then leaving a key on the outside of what you are protecting on a little cup hook.

I am not going to go any deeper into the concepts of encryption, because the topic is far too complex to cover in one section of a chapter. You just need to consider two things when pondering your encryption needs. First: who can decrypt the data, and how? If the information needed to do the decryption is stored with the encrypted data, you are not gaining much. SQL Server’s basic encryption uses certificates and keys that are all on the local server. This can keep employees’ eyes off of sensitive data, but if a person got the entire database along with a sysadmin-role user, they could get to everything. SQL Server has a better method called Extensible Key Management (EKM), but it still isn’t perfect. The best method is to implement your decryption engine on a different server that is not linked to the database engine at all. Second: if someone gets the data, how easily could they decrypt it given unlimited computing power and time? Some encryption is deterministic, in that every time value ‘A’ is encrypted, the same binary string is output. This can make things like searching for encrypted values faster, but it also makes the encryption easier to break. Probabilistic encryption schemes output a different value every time, which makes it really difficult to search for an encrypted value.
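As a minimal sketch of the encryption functions mentioned in the list above (passphrase based for brevity; the values and passphrase here are purely illustrative, and in real use the passphrase must not be stored alongside the data):

DECLARE @Encrypted varbinary(256);

--ENCRYPTBYPASSPHRASE is probabilistic; the output differs on every call
SET @Encrypted = ENCRYPTBYPASSPHRASE('A passphrase not stored with the data',
                                     N'4111-1111-1111-1111');
SELECT @Encrypted AS EncryptedValue;

--DECRYPTBYPASSPHRASE returns varbinary, so cast back to the original type
SELECT CAST(DECRYPTBYPASSPHRASE('A passphrase not stored with the data',
                                @Encrypted) AS nvarchar(30)) AS DecryptedValue;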

A very useful and security-minded goal you should have in most organizations is to try to avoid holding sensitive information (like credit card numbers) in your database at all and use a third-party service to handle it for you. Systems like this will give you a token back that only one organization can make use of. Of course, if you do work for a bank, I clearly could not do your encryption needs justice in a single chapter, much less this section.

In the end, if your data is stolen, the goal needs to be that the data is worthless, so that all your press release needs to say is “Company X regrets to say that some financial data tapes were stolen from our vaults. All data was encrypted; hence, there is little chance any usable data was acquired.” And you, as DBA, will be worshipped—and isn’t that really the goal of every programmer?

Using Dynamic Data Masking to Hide Data from Users

Dynamic data masking is used to obfuscate data from users but does nothing for the security of your data if it is stolen. What it does is allow you to show a user a column but, instead of showing the actual data, mask it from their view. As an example, consider a table that has e-mail addresses. You might want to mask the data so most users can’t see the actual values when they query the table. Dynamic data masking falls under the heading of Security Center features in Books Online (https://msdn.microsoft.com/en-us/library/mt130841.aspx), but as we will see, it doesn’t behave like classic security features, because you will be adding code to the DDL of the table.

There are a couple of limitations. First, masks pertain only to SELECT operations; if users can INSERT, UPDATE, or DELETE rows, they can still do so even though the data is masked when they read it. Hence, this feature is generally going to be used strictly for read operations. The second, fairly huge, limitation is that there is only one right that allows unmasked viewing of all masked data, and it is granted at the database level (not the column level, nor the table level).

To demonstrate, consider the following table (including my favorite column I have ever included in a demo, YachtCount!):

CREATE SCHEMA Demo;
GO
CREATE TABLE Demo.Person --warning, I am using very small column datatypes in this
                         --example to make looking at the output easier, not as proper sizes
(
    PersonId    int NOT NULL CONSTRAINT PKPerson PRIMARY KEY,
    FirstName    nvarchar(10) NULL,
    LastName    nvarchar(10) NULL,
    PersonNumber varchar(10) NOT NULL,
    StatusCode    varchar(10) CONSTRAINT DFLTPersonStatus DEFAULT ('New')
                            CONSTRAINT CHKPersonStatus
                                     CHECK (StatusCode IN ('Active','Inactive','New')),
    EmailAddress nvarchar(40) NULL,
    InceptionTime date NOT NULL, --Time we first saw this person. Usually the row create time,
                                 --but not always
    --YachtCount is a number that I didn't feel could insult anyone of any
    --origin, ability, etc. that I could put in this table
    YachtCount   tinyint NOT NULL CONSTRAINT DFLTPersonYachtCount DEFAULT (0)
                            CONSTRAINT CHKPersonYachtCount CHECK (YachtCount >= 0)
);

Some of this data we will want to keep hidden from viewers. The PersonNumber, StatusCode, EmailAddress, InceptionTime, and YachtCount values all need to be hidden away. There are four types of built-in masks that we can use:

  • Default: Takes the default mask of the datatype (not the default of the column)
  • Email: Masks the e-mail so you only see a few meaningful characters to give you an idea of what the e-mail address is, but not the full address
  • Random: Puts a random number in place of an actual number (which can actually be kind of weird, as we will see)
  • Custom String (Partial): Basically gives you control over what characters to keep and what to replace them with

Dynamic data masking becomes part of the table itself, so we will use ALTER COLUMN to add this to the columns mentioned, starting with using default for each of the columns:

ALTER TABLE Demo.Person ALTER COLUMN PersonNumber
    ADD MASKED WITH (Function = 'default()');
ALTER TABLE Demo.Person ALTER COLUMN StatusCode
    ADD MASKED WITH (Function = 'default()');
ALTER TABLE Demo.Person ALTER COLUMN EmailAddress
    ADD MASKED WITH (Function = 'default()');
ALTER TABLE Demo.Person ALTER COLUMN InceptionTime
    ADD MASKED WITH (Function = 'default()');
ALTER TABLE Demo.Person ALTER COLUMN YachtCount
    ADD MASKED WITH (Function = 'default()');

Next, we add a few rows:

INSERT INTO Demo.Person (PersonId,FirstName,LastName,PersonNumber, StatusCode,
                         EmailAddress, InceptionTime,YachtCount)
VALUES(1,'Fred','Washington','0000000014','Active','[email protected]','1/1/1959',0),
      (2,'Barney','Lincoln','0000000032','Active','[email protected]','8/1/1960',1),
      (3,'Wilma','Reagan','0000000102','Active',NULL,'1/1/1959',1);

Next, we create a user that will be masked, and give the user SELECT rights to the table:

CREATE USER MaskedMarauder WITHOUT LOGIN;
GRANT SELECT ON Demo.Person TO MaskedMarauder;

Then, we select the data as the sysadmin user and select it again impersonating the MaskedMarauder:

SELECT PersonId, PersonNumber, StatusCode, EmailAddress, InceptionTime, YachtCount
FROM   Demo.Person;
EXECUTE AS USER = 'MaskedMarauder';
SELECT PersonId, PersonNumber, StatusCode, EmailAddress, InceptionTime, YachtCount
FROM   Demo.Person;
REVERT;

This returns two result sets, the first unmasked, the second masked:

PersonId    PersonNumber StatusCode EmailAddress                  InceptionTime YachtCount
----------- ------------ ---------- ----------------------------- ------------- ----------
1           0000000014   Active     [email protected]       1959-01-01    0
2           0000000032   Active     [email protected]         1960-08-01    1
3           0000000102   Active     NULL                          1959-01-01    1
PersonId    PersonNumber StatusCode EmailAddress                  InceptionTime YachtCount
----------- ------------ ---------- ----------------------------- ------------- ----------
1           xxxx         xxxx       xxxx                          1900-01-01    0
2           xxxx         xxxx       xxxx                          1900-01-01    0
3           xxxx         xxxx       NULL                          1900-01-01    0

A few initial notes. First, the masked data can look like real data in some cases; YachtCount = 0 is the actual value for at least one row. Second, NULL data is still shown as NULL. Some defaults work nicely for the general case, like strings being masked as 'xxxx'. However, we probably want a bit more information in some cases (EmailAddress, PersonNumber), and we will use a random value for YachtCount.

First we will change the EmailAddress masking. Instead of default(), we will use email() as the masking function:

ALTER TABLE Demo.Person ALTER COLUMN EmailAddress
    ADD MASKED WITH (Function = 'email()');

SELECTing the data as the MaskedMarauder (a name that is not getting old to type), we see that the email mask gives us enough information to perhaps verify an address with a customer (note that all masked email addresses end in .com):

PersonId    PersonNumber StatusCode EmailAddress                   InceptionTime YachtCount
----------- ------------ ---------- ------------------------------ ------------- ----------
1           xxxx         xxxx       [email protected]                  1900-01-01    0
2           xxxx         xxxx       [email protected]                  1900-01-01    0
3           xxxx         xxxx       NULL                           1900-01-01    0          

But we can do better. Next we will try the random() masking function with YachtCount. Random masking works only with numeric data:

ALTER TABLE Demo.Person ALTER COLUMN YachtCount
    ADD MASKED WITH (Function = 'random(1,100)'); --make the value between 1 and 100

Viewing the masked versions of the data now shows the following nonsensical, definitely noncontroversial, data:

PersonId    PersonNumber StatusCode EmailAddress                   InceptionTime YachtCount
----------- ------------ ---------- ------------------------------ ------------- ----------
1           xxxx         xxxx       [email protected]                  1900-01-01    45
2           xxxx         xxxx       [email protected]                  1900-01-01    74
3           xxxx         xxxx       NULL                           1900-01-01    42          

Finally, we will use the custom string masking function, partial(). This lets you mask all or some of the data in string-based columns using a function of the form partial(number of characters to keep at the start of the string, replacement string, number of characters to keep at the end of the string). We are going to mask PersonNumber to show the first and last characters, and to replace all StatusCode values with 'Unknown':

ALTER TABLE Demo.Person ALTER COLUMN PersonNumber
    ADD MASKED WITH (Function = 'partial(1,"-------",2)'); --note double quotes on the text
ALTER TABLE Demo.Person ALTER COLUMN StatusCode
    ADD MASKED WITH (Function = 'partial(0,"Unknown",0)');

Here is our final output, though the YachtCount may change:

PersonId    PersonNumber StatusCode EmailAddress                  InceptionTime YachtCount
----------- ------------ ---------- ----------------------------- ------------- ----------
1           0-------14   Unknown    [email protected]                 1900-01-01    26
2           0-------32   Unknown    [email protected]                 1900-01-01    9
3           0-------02   Unknown    NULL                          1900-01-01    15

As mentioned earlier in the section, if we want to allow a user to see unmasked data, there are two ways. First, we can grant the UNMASK permission at the database level, which means the user will be able to see all data in the database unmasked. The second is basically impersonation: create a user that has UNMASK rights, and let the restricted user see the data through that user.
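As a quick sketch of the first option, continuing with the demonstration user (and revoking the right again afterward so the rest of the example behaves as shown):

GRANT UNMASK TO MaskedMarauder; --database-level permission

EXECUTE AS USER = 'MaskedMarauder';
SELECT PersonId, PersonNumber, StatusCode --now returns unmasked data
FROM   Demo.Person;
REVERT;

REVOKE UNMASK FROM MaskedMarauder;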

While it is limited in general-purpose value, dynamic data masking may come in handy in some situations where you want to hide the actual values of columns in tables most of the time, particularly when you want to show the user partial values.

Auditing SQL Server Use

Often, a client won’t care too much about security, so he or she doesn’t want to limit what a user can do in the database. However, many times, there’s a hidden subtext: “I don’t want to be restrictive, but how can we keep up with what users have done?”

Ideally though, auditing is not done as an alternative to implementing a full-blown security system, but simply to watch what users do, in case they do something they shouldn’t. To implement our Big Brother security scenario, I’ll configure a server and database audit using SQL Server’s audit features. There are other methods, like adding triggers (DML and DDL) or using Extended Events. I won’t cover them here, but they can be alternatives to the audit feature if you cannot use it for some reason (like database audit not being available in Standard edition, or if you are using Azure SQL Database).

■ Note  In SQL Server 2008, Microsoft introduced two other features that are interesting for watching changes to the database, but they are not of direct value for security purposes. Change data capture allows you to do full tracking of every change in data, and change tracking captures the fact that a change has occurred since the last time you checked the change tracking system. However, neither of those features will tell you which user made a given change; thus, they don’t have security applications. They are, however, amazingly useful for implementing a reporting/data warehouse system, because finding the rows that changed for ETL has always been the hardest part.

SQL Server Audit is a tremendously cool feature that will allow you to implement detailed monitoring in a declarative manner and will make the process of meeting auditing requirements much easier than ever before. Instead of lots of code to pore through, you can just print the audits that you are enforcing, and you are done. Using SQL Server Audit, you will be able to watch what the user is doing. If you want to capture what is changing in the data, you can use the Temporal feature (SQL Server 2016), or a DML trigger, as we discussed in Chapter 8.

■ Note  As usual, there are graphical user interface (GUI) versions of everything I discuss, and I imagine that many DBAs (even some hardcore ones) will use the GUI for the most part. But as with everything else in this book, I want to show the syntax, because it will make using the GUI easier, and if you have to apply these settings to more than one server, you will quickly learn to write scripts, or at least to use the GUI’s right-click script option.

Auditing is file based; you don’t log to a database, but rather to a directory you specify on your server (or off your server if you so desire). You will want to make sure that the directory is a very fast-access location to write to, because writing to it will happen as part of the transactions you execute. When auditing is turned on, each audited operation is queued for audit (or, depending on the failure settings, is not executed at all if the audit cannot be written). The engine doesn’t write directly to the file for each action; for maximum performance, SQL Server Audit uses Service Broker queues under the covers, so it doesn’t have to write audit data to the file as part of each transaction. Instead, queue mechanisms make sure that the data is written asynchronously (there is a setting to force the audit trail to be written within a certain amount of time, or synchronously if you need it to be guaranteed 100% up to date).
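That writing behavior is controlled by the QUEUE_DELAY option of the server audit object (the audit itself is created just below; note that an audit must be in a disabled state before it can be altered):

--QUEUE_DELAY is in milliseconds; 0 forces synchronous writes as part of
--each transaction, and the minimum asynchronous delay is 1000 (1 second)
ALTER SERVER AUDIT ProSQLServerDatabaseDesign_Audit
    WITH (QUEUE_DELAY = 1000);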

The audit structures consist of three basic layers of objects:

  • Server audit: Top-level object that defines where the audit file will be written to and other essential settings
  • Server audit specification: Defines the actions at the server level that will be audited
  • Database audit specification: Defines the actions at the database level that will be audited

In the following sections, we will go through the steps to define an audit specification, enable the audit, and then view the audit results.

Defining an Audit Specification

As an example, we will set up an audit on our test server/security database to watch for logins being changed (such as a new login created, or one altered or dropped), to watch for the Employee or Manager user executing a SELECT statement against the Products.Product table, and to watch for SELECTs by anyone on Sales.Invoice. First, we define the SERVER AUDIT:

USE master;
GO
CREATE SERVER AUDIT ProSQLServerDatabaseDesign_Audit
TO FILE                       --choose your own directory; I expect most people
(     FILEPATH = N'c:\temp\'  --have a temp directory on their system drive
      ,MAXSIZE = 15 MB        --of each file
      ,MAX_ROLLOVER_FILES = 0 --unlimited
)
WITH
(
     ON_FAILURE = SHUTDOWN --if the file cannot be written to,
                           --shut down the server
);

Image Note  The audit is created in a disabled state. You need to start it once you have added audit specifications.

The next step is to define an audit specification to set up the container to hold a list of related items to audit. This container-based approach lets you easily enable or disable auditing for the entire group of related features. Create the container by defining a SERVER AUDIT SPECIFICATION:

CREATE SERVER AUDIT SPECIFICATION ProSQLServerDatabaseDesign_Server_Audit
    FOR SERVER AUDIT ProSQLServerDatabaseDesign_Audit
    WITH (STATE = OFF); --disabled. I will enable it later

The next step is to add things to the specification to audit. There are lots of different things you can audit. You can find the list under “SQL Server Audit Action Groups and Actions” in Books Online.

At the server level, you can watch for changes to the configuration of your server. In the example, we are going to watch for server principals to change:

ALTER SERVER AUDIT SPECIFICATION ProSQLServerDatabaseDesign_Server_Audit
    ADD (SERVER_PRINCIPAL_CHANGE_GROUP);
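Other action groups can be added to the same specification in exactly the same way. For example (FAILED_LOGIN_GROUP is one of the documented server-level action groups; the specification must still be disabled to alter it), to also capture failed login attempts:

```sql
ALTER SERVER AUDIT SPECIFICATION ProSQLServerDatabaseDesign_Server_Audit
    ADD (FAILED_LOGIN_GROUP);
```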

Next, we will go through the same process for the database that we did for the server, setting up the container for the audit using the DATABASE AUDIT SPECIFICATION command. At the database level, we can look for configuration changes, but perhaps more interestingly, we can audit people accessing tables. In this example, we will monitor something that is pretty difficult to do with any other method: certain users executing SELECT statements against a table:

USE ClassicSecurityExample;
GO
CREATE DATABASE AUDIT SPECIFICATION
                   ProSQLServerDatabaseDesign_Database_Audit
    FOR SERVER AUDIT ProSQLServerDatabaseDesign_Audit
    WITH (STATE = OFF);

This time, we will audit the Employee and Manager database users' use of the Products.Product table and the Products.AllProducts view (created earlier in this chapter). Here is how we add those items to the specification:

ALTER DATABASE AUDIT SPECIFICATION
    ProSQLServerDatabaseDesign_Database_Audit
    ADD (SELECT ON Products.Product BY Employee, Manager),
    ADD (SELECT ON Products.AllProducts BY Employee, Manager);

Enabling an Audit Specification

Finally, we enable the two audit specifications that we've just created. Note that enabling a specification enables all of the audits defined in that container:

USE master;
GO
ALTER SERVER AUDIT ProSQLServerDatabaseDesign_Audit
    WITH (STATE = ON);
ALTER SERVER AUDIT SPECIFICATION ProSQLServerDatabaseDesign_Server_Audit
    WITH (STATE = ON);
GO
USE ClassicSecurityExample;
GO
ALTER DATABASE AUDIT SPECIFICATION ProSQLServerDatabaseDesign_Database_Audit
    WITH (STATE = ON);

Viewing the Audit Trail

Now that our audits are enabled, we can monitor the usage of the features and functionality that we're auditing. The following script performs a few actions that will be picked up by the audit objects we set up in the previous sections:

CREATE LOGIN MrSmith WITH PASSWORD = 'A very g00d password!';
GO
USE ClassicSecurityExample;
GO
EXECUTE AS USER = 'Manager';
GO
SELECT *
FROM   Products.Product;
GO
SELECT  *
FROM    Products.AllProducts; --Permissions will fail
GO
REVERT;
GO
EXECUTE AS USER = 'Employee';
GO
SELECT  *
FROM    Products.AllProducts; --Permissions will fail
GO
REVERT;
GO

The following query will let us view the log that was set up with the CREATE SERVER AUDIT command in the first step of the process. By executing this

SELECT event_time, succeeded,
       database_principal_name, statement
FROM   sys.fn_get_audit_file('c:\temp\*', DEFAULT, DEFAULT);

we can see the different statements that were executed (and see the two statements where the permission failed):

event_time             succeeded database_principal_name  statement
---------------------- --------- ------------------------ ----------------------------------
2016-09-02 03:36:53.31 1
2016-09-02 03:36:53.37 1         dbo                      CREATE LOGIN MrSmith WITH PASS...
2016-09-02 03:36:53.40 1         Manager                  SELECT *
                                                          FROM   Products.Product
2016-09-02 03:36:53.58 0         Manager                  SELECT  *
                                                          FROM    Products.AllProducts  ...
2016-09-02 03:36:53.60 0         Employee                 SELECT  *
                                                          FROM    Products.AllProducts  ...

There are lots of other pieces of information returned by the sys.fn_get_audit_file function that are very useful, especially the server principal information. Using a few of the catalog views, you can get a picture of what the audits do. Note that the query I built works only at an object (table/view/etc.) level. It could be extended if you wanted to do column-level audits.
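As a sketch of pulling a few of those other columns (the column names come from the sys.fn_get_audit_file documentation; adjust the path to match your audit's FILEPATH setting):

```sql
SELECT event_time, action_id, server_principal_name,
       database_principal_name, object_name, statement
FROM   sys.fn_get_audit_file('c:\temp\*', DEFAULT, DEFAULT);
```

The server_principal_name column is particularly handy, since it identifies who was connected even when the action occurred under an impersonated database principal.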

Viewing the Audit Configuration

Finally, once you have set up the audit trail, it is often important to find out what is being audited. You can do this using several of the catalog views:

  • sys.server_audits: One row per server audit
  • sys.server_audit_specifications: Details about the audits that have been configured for this server, such as when it was started, the last time it was modified, and so on
  • sys.server_audit_specification_details: Links the objects being audited and actions being audited

The following query, using these views, will get you the definition of what is being audited at a server level:

SELECT  sas.name AS audit_specification_name,
        audit_action_name
FROM    sys.server_audits AS sa
          JOIN sys.server_audit_specifications AS sas
             ON sa.audit_guid = sas.audit_guid
          JOIN sys.server_audit_specification_details AS sasd
             ON sas.server_specification_id = sasd.server_specification_id
WHERE  sa.name = 'ProSQLServerDatabaseDesign_Audit';

By executing this, given all of the audit stuff we had set up, the following is returned:

audit_specification_name                          audit_action_name
------------------------------------------------- ------------------------------------------
ProSQLServerDatabaseDesign_Server_Audit           SERVER_PRINCIPAL_CHANGE_GROUP

Digging deeper, to get the objects and actions, the following query will get you the database-level actions that are being audited:

SELECT audit_action_name,dp.name AS [principal],
       SCHEMA_NAME(o.schema_id) + '.' + o.name AS object
FROM   sys.server_audits AS sa
         JOIN sys.database_audit_specifications AS sas
             ON sa.audit_guid = sas.audit_guid
         JOIN sys.database_audit_specification_details AS sasd
             ON sas.database_specification_id = sasd.database_specification_id
         JOIN sys.database_principals AS dp
             ON dp.principal_id = sasd.audited_principal_id
         JOIN sys.objects AS o
             ON o.object_id = sasd.major_id
WHERE  sa.name = 'ProSQLServerDatabaseDesign_Audit'
  and  sasd.minor_id = 0; --need another query for column level audits

This query returns the following:

audit_action_name   principal     object
------------------- ------------- ------------------------------
SELECT              Employee      Products.Product
SELECT              Manager       Products.Product
SELECT              Employee      Products.allProducts
SELECT              Manager       Products.allProducts

Quite a few more catalog views pertain to the server and database audit facilities of SQL Server, certainly more than I need to cover in this chapter. The basic setup of auditing is really quite straightforward, and auditing is a feature that is useful for DBAs/architects who need to audit the activities of their users, and especially of their administrators.

Best Practices

Security is always one of the most important tasks to consider when implementing a system. Storing data could be worse than not storing it, if it can be used for improper purposes.

  • Secure the server first: Although this topic is outside the scope of this book, be certain that the server is secure. If a user can get access to your backup files and take them home, all the database security in the world won’t help.
  • Grant rights to roles rather than users: People come and people go, but the roles that they fulfill will usually be around for a long time. By defining common roles, you can make adding a new user easy (possibly to replace another user). Just make the user a member of the same role, rather than adding rights directly to the user. Ideally, the roles in your database are the same in development, test, and production, but the users who are members of those roles will likely change from environment to environment.
  • Use schemas to simplify security: Because you can grant rights at a schema level, you can grant rights to SELECT, INSERT, UPDATE, DELETE, and even EXECUTE everything within a schema. Even new objects that are added to the schema after the rights are granted are usable by the grantees.
  • Consider security using stored procedures as the access layer: Using stored procedures as the only way for a user to get access to the data presents the user with a nice interface to the data. If procedures are well named, you can also easily apply security to match up with the interfaces that use them. Chapter 13 will advance this concept further.
  • Don’t overuse the impersonation features: EXECUTE AS is a blessing, and it opens up a world of possibilities. It does, however, have a darker side because it can open up too much of a security hole without careful consideration of its use. Add a database with TRUSTWORTHY access set to ON, and a procedure can be written to do anything on the server, which could be exploited as a big security hole by a devious programmer.
  • Encrypt sensitive data: SQL Server has several means of encrypting data, and there are other methods available to do it off of the SQL Server box. Use it as much as necessary, but make sure not to store everything needed to decrypt the data with the encrypted data, in case someone gets hold of the data. Use Transparent Data Encryption to secure important files from exploit if they fall into the wrong hands.
  • Segregate security between environments: Security in development environments should be very different from security in production. Take care that developers don't end up with the same rights to production data that they have in development just because you use the same security script to create your development servers as your production ones. Developers generally should be given very few rights to production data, to limit exposure of sensitive data.
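The "roles" and "schemas" bullets above can be sketched together as follows (the role, schema, and login names are hypothetical):

```sql
--grant rights to a role at the schema level; any object added to the
--Sales schema later is automatically covered by the grant
CREATE ROLE SalesReader;
GRANT SELECT ON SCHEMA::Sales TO SalesReader;

--adding a new user then becomes a simple membership change,
--rather than a pile of object-level grants
CREATE USER NewAnalyst FOR LOGIN NewAnalyst;
ALTER ROLE SalesReader ADD MEMBER NewAnalyst;
```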

Summary

Security is a large topic, and understanding all the implications requires far more information than we covered in this chapter. I discussed some of the ways to secure your data inside a single SQL Server database. This isn't an easy subject, but it's far easier than securing the server itself. Luckily, inside the database we're usually looking to protect ourselves from ordinary users, and doing a good job of encryption is a barricade that keeps most thieves at bay.

To provide this security, we discussed a range of topics for which we need to design security into our database usage:

  • The basics of permissions-based security using SQL Server DDL statements and how this security works on SQL Server objects. This included using principals of several types: logins, users, roles, certificates, and application roles, and then applying different security criteria to the base tables and columns in a database.
  • Using coded objects to encapsulate statements that can limit the queries that users can execute. We discussed using several types of objects:
    • Stored procedures and scalar functions: Giving advanced usages to users without letting them know how they’re doing it. Included in this section was how security works across database lines and server lines.
    • Views and table-valued functions: Used to break tables up in a simple manner, either row- or column-wise. The goal is to make security seamless, such that the users feel that only this database has the data to which they have rights.
  • We looked at obfuscating data to make it hard to view the data unless you specifically try to, generally by using encryption to make the data unreadable without a key.
  • Next, we discussed using an audit trail to give the user an audit of what goes on in given rows and columns in the database. This is the typical method when it comes to most data, because it's easy to give the users access to the lists of what has changed (and why, if the application asks for a reason with certain types of changes).

Securing your servers against the most common threats is generally sufficient, because ordinary users won't go to great lengths to hack your database when getting caught can cost them their employment. Hence, just setting up basic security is generally good enough for all but really sensitive/valuable data (such as a database of credit card numbers linked with the names and addresses of the card holders…not a good idea).

Of course, make sure that you understand that there is a lot more to security than just security on the database. The biggest task is limiting the people/processes that can even connect to the database to the correct set of users, and that means working with the administrators of your network, web sites, and applications to make sure to limit the threat surface as much as possible.
