Chapter 12. In-Memory OLTP Improvements in SQL Server 2016

When In-Memory OLTP was introduced in SQL Server 2014, many developers were initially excited. The promise of a new, ultra-high-performance data processing engine, coupled with a leading relational database engine, offered a potentially massive improvement for many SQL Server developers. However, this excitement quickly turned into mild disappointment at the number of restrictions placed on In-Memory OLTP. Many of these restrictions prevented wide adoption of the technology and forced it into a niche set of very narrow implementation scenarios. Some of them, such as the lack of support for large object (LOB) data types or for ALTER commands, dampened many people's enthusiasm for the technology.

In-Memory OLTP has followed a pattern familiar from other SQL Server features. SQL Server 2014 saw the introduction of the In-Memory OLTP Engine. With SQL Server 2016, the feature has evolved: many of the restrictions present in SQL Server 2014 have been removed and the existing functionality has been extended. In this chapter, we will take a look at the improvements that should now make In-Memory OLTP genuinely attractive to almost all developers out there.

This chapter will demonstrate these improvements and additions, including:

  • Altering existing memory-optimized objects
  • Expanded data type support
  • Expanded functionality and reduced limitations
  • Integration of In-Memory OLTP into other areas of the database engine

Ch-Ch-Changes

It's not only the legend himself, Mr. David Bowie, who could sing about changes. In SQL Server 2014, we were limited to creating In-Memory OLTP objects that were unchangeable after creation. If we needed to change the structure of a memory-optimized table, we had to resort to dropping and recreating the object with the new, updated structure.

For many developers and customers, this was a deal breaker. Adding and removing columns or indexes is something every SQL Server developer is used to doing without such restrictions. Especially with the advent of agile software development and related strategies such as continuous integration, making frequent, incremental changes to applications is something many development teams strive for.

Now, with SQL Server 2016, it is possible to do just that. We will be continuing this chapter using the same database as in Chapter 11, Introducing SQL Server In-Memory OLTP, creating a simple memory-optimized table:

USE master 
GO 
CREATE DATABASE InMemoryTest 
    ON  
    PRIMARY(NAME = [InMemoryTest_disk],  
               FILENAME = 'C:\temp\InMemoryTest_disk.mdf', size=100MB),  
    FILEGROUP [InMemoryTest_inmem] CONTAINS MEMORY_OPTIMIZED_DATA 
               (NAME = [InMemoryTest_inmem],  
               FILENAME = 'C:\temp\InMemoryTest_inmem') 
   LOG ON (name = [InMemoryTest_log], Filename='c:\temp\InMemoryTest_log.ldf', size=100MB) 
   COLLATE Latin1_General_100_BIN2; 
GO  
USE InMemoryTest 
GO  
CREATE TABLE InMemoryTable 
( 
   UserId INT NOT NULL, 
   UserName VARCHAR(20) COLLATE Latin1_General_CI_AI NOT NULL, 
   LoginTime DATETIME2 NOT NULL, 
   LoginCount INT NOT NULL, 
   CONSTRAINT PK_UserId  PRIMARY KEY NONCLUSTERED (UserId), 
   INDEX HSH_UserName HASH (UserName) WITH (BUCKET_COUNT=10000) 
    
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY=SCHEMA_AND_DATA); 
GO 
INSERT INTO dbo.InMemoryTable 
        ( UserId, UserName , LoginTime, LoginCount ) 
VALUES ( 1, 'Mickey Mouse', '2016-01-01', 1 ); 
GO 

The ALTER TABLE operations supported for memory-optimized tables in SQL Server 2016 are as follows:

  • Changing, adding, and removing a column
  • Adding and removing an index
  • Adding and removing a constraint
  • Changing the bucket count

First up, we will add a column to the demo table. Note how the DDL statement is no different from adding a column to a disk-based table:

USE InMemoryTest; 
GO  
ALTER TABLE dbo.InMemoryTable ADD NewColumn INT NULL; 
GO 

We can also remove columns. Here, the DDL has also not changed:

USE InMemoryTest; 
GO  
ALTER TABLE dbo.InMemoryTable DROP COLUMN NewColumn; 
GO 

We are also able to add indexes after a table has already been created:

USE InMemoryTest; 
GO 
ALTER TABLE dbo.InMemoryTable ADD INDEX HSH_LoginTime NONCLUSTERED HASH (LoginTime) WITH (BUCKET_COUNT = 250); 
GO  

The column DDL statements look no different from their disk-based counterparts. However, adding an index to a memory-optimized table is done using ALTER TABLE rather than CREATE INDEX. The same applies to dropping or rebuilding an index:

USE InMemoryTest; 
GO  
ALTER TABLE dbo.InMemoryTable ALTER INDEX HSH_LoginTime REBUILD WITH (BUCKET_COUNT=10000); 
GO  
ALTER TABLE dbo.InMemoryTable DROP INDEX HSH_LoginTime; 
GO 

In the previous chapter, we discovered that the indexes on a memory-optimized table (the Bw-tree and hash bucket structures with their linked lists) provide the access paths to the in-memory rows of the table. As such, they are more an extension of the table definition (similar to constraints) than an index on a disk-based table is. This is reflected in the requirement to issue an ALTER TABLE command to add, remove, or alter an index on a memory-optimized table.

Altering the bucket count of an index (as shown in the previous listing) is quite interesting. We always strive to implement new code with the plan of ensuring it is fit for purpose before we deploy to production. We also know that the first implementation of code rarely survives its first encounter with production usage. Predicting the correct bucket count is like predicting the future: the more information we have, the better our predictions can be. But who has full knowledge of how a new feature will be used? It is rarely possible to get a 1:1 copy of production data in a development environment. As such, changing the bucket count is likely something that we will need to do. Later in this chapter, we will look at the internals of indexing memory-optimized tables and cover bucket counts in more detail.
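When a representative data set is available, a common starting point is to base the bucket count on the number of distinct values of the index key, with some headroom for growth. The following is only a rough sizing sketch against our demo table; the doubling factor is purely illustrative:

USE InMemoryTest
GO
SELECT COUNT(DISTINCT UserName) AS [distinct_user_names],
       COUNT(DISTINCT UserName) * 2 AS [suggested_bucket_count] -- ~100% headroom, illustrative only
FROM dbo.InMemoryTable;
GO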

A further interesting addition to SQL Server 2016 is the ability to bundle multiple changes (specifically, multiple change types—columns and indexes) together in one ALTER statement:

USE InMemoryTest 
GO  
ALTER TABLE dbo.InMemoryTable  
ADD ANewColumn INT NULL, 
   AnotherColumn TINYINT NULL, 
INDEX HSH_ANewColumn NONCLUSTERED HASH (ANewColumn) WITH (BUCKET_COUNT = 250); 

Adding multiple columns was possible in previous versions of SQL Server, but the ability to add an index in the same statement is new. This has to do with how the In-Memory OLTP Engine creates and manages memory-optimized objects in general. In Chapter 11, Introducing SQL Server In-Memory OLTP, we saw that memory-optimized objects are memory-resident objects with matching access methods (in fact, compiled C code). Indexes for memory-optimized tables are part of the memory-optimized table insofar as they require the C constructs and access methods to ensure SQL Server can work with them. Because these objects require compilation into machine code, they are somewhat static, even with the new ability to issue ALTER commands to them.

To overcome this logical limitation of compiled code being unchangeable, SQL Server receives the desired changes inside the ALTER statement and proceeds to create a new version of the existing object. Upon creation of this copy, the desired changes are incorporated into the new version. If the object being changed is a table, the rows from the old version are copied to the new version. The background process then compiles the access methods (including the changed columns) and the new version is ready for use. At this point, SQL Server dereferences the old version and redirects future calls to the new version.

As the ALTER of a table requires that the entire contents of a table be copied from the old version to the new version, we must be mindful of the fact that we are doubling the memory requirements for the table for the duration of the ALTER transaction (and until the background garbage collection cleans up the old structure). Equally, ALTER commands for memory-optimized tables are offline operations. This means that the memory-optimized table is blocked for the duration of the ALTER transaction.

As such, if we are manipulating a large table, we must ensure that we have enough memory available for the operation to succeed and understand that the table will be blocked for the duration of the transaction. It may, therefore, be prudent to consider emptying extremely large tables before issuing the ALTER command, to allow the change to complete more quickly.
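Before altering a large memory-optimized table, it can therefore be worth checking how much memory the table and its indexes currently consume, remembering that roughly double that amount will be needed while the ALTER runs. The following is a minimal sketch using the sys.dm_db_xtp_table_memory_stats DMV; the doubling factor is a rough rule of thumb, not an exact figure:

USE InMemoryTest
GO
SELECT OBJECT_NAME(tms.object_id) AS [table_name],
    tms.memory_used_by_table_kb,
    tms.memory_used_by_indexes_kb,
    -- the table (and its indexes) is copied during the ALTER, so plan for roughly twice the current footprint
    (tms.memory_used_by_table_kb + tms.memory_used_by_indexes_kb) * 2 AS [rough_kb_needed_during_alter]
FROM sys.dm_db_xtp_table_memory_stats AS tms
WHERE tms.object_id = OBJECT_ID('dbo.InMemoryTable');
GO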

Many ALTER statements are metadata-only changes. These can be processed in parallel and have a greatly reduced impact on the transaction log: when only metadata needs to change, only those metadata changes are written to the transaction log. Coupled with parallel processing, we can expect such changes to execute extremely quickly.

However, parallel processing is excluded for a few operations that require more than simple metadata changes; these are:

  • Altering or adding a column to use a large object (LOB) type: nvarchar(max), varchar(max), or varbinary(max)
  • Adding or dropping a COLUMNSTORE index
  • Almost anything that affects an off-row column:
    • Causing an on-row column to move off-row
    • Causing an off-row column to move on-row
    • Creating a new off-row column
    • Exception: Lengthening an already off-row column is logged in the optimized way

As well as forcing serial processing, these operations cause the entire contents of the altered table to be written to the transaction log. This can fill up the transaction log and also place extra load on the storage subsystem.
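To get a feel for this difference, a simple experiment is to compare the log space consumed by a metadata-only change with that of a LOB-related change. The column names below are purely illustrative, and the columns are dropped again so the demo table keeps its original shape for the rest of the chapter:

USE InMemoryTest
GO
SELECT used_log_space_in_bytes FROM sys.dm_db_log_space_usage; -- baseline

ALTER TABLE dbo.InMemoryTable ADD MetadataOnlyColumn INT NULL;  -- metadata-only change
SELECT used_log_space_in_bytes FROM sys.dm_db_log_space_usage;

ALTER TABLE dbo.InMemoryTable ADD LobColumn VARCHAR(MAX) NULL;  -- forces a full copy of the table through the log
SELECT used_log_space_in_bytes FROM sys.dm_db_log_space_usage;

-- clean up so the remaining examples in the chapter are unaffected
ALTER TABLE dbo.InMemoryTable DROP COLUMN MetadataOnlyColumn;
ALTER TABLE dbo.InMemoryTable DROP COLUMN LobColumn;
GO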

The removal of restrictions on altering tables extends to altering natively compiled stored procedures. The well-known ALTER PROCEDURE command can now be used to make changes to previously created natively compiled stored procedures (the following demo code creates the stored procedure with no content to allow the ALTER statement to then be run):

USE InMemoryTest 
GO 
CREATE PROCEDURE dbo.InMemoryInsertOptimized 
    @UserId INT, 
    @UserName VARCHAR(255), 
    @LoginTime DATETIME2, 
    @LoginCount INT 
WITH NATIVE_COMPILATION, SCHEMABINDING 
AS 
BEGIN ATOMIC WITH 
( 
   TRANSACTION ISOLATION LEVEL = SNAPSHOT, 
   LANGUAGE = N'English' 
) 
   RETURN 0; 
END; 
GO 
 
ALTER PROCEDURE dbo.InMemoryInsertOptimized 
    @UserId INT, 
    @UserName VARCHAR(255), 
    @LoginTime DATETIME2, 
    @LoginCount INT 
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER 
AS 
BEGIN ATOMIC WITH 
( 
   TRANSACTION ISOLATION LEVEL = SNAPSHOT, 
   LANGUAGE = N'English' 
) 
   -- Add an Insert 
   INSERT dbo.InMemoryTable 
   (UserId, UserName, LoginTime, LoginCount) 
    VALUES 
    (@UserId, @UserName, @LoginTime, @LoginCount); 
   RETURN 0; 
END; 
GO

The following aspects of an existing natively compiled stored procedure can be changed using the ALTER PROCEDURE syntax:

  • Parameters
  • EXECUTE AS
  • TRANSACTION ISOLATION LEVEL
  • LANGUAGE
  • DATEFIRST
  • DATEFORMAT
  • DELAYED_DURABILITY

Tip

However, it is important to note that it is not possible to ALTER a natively compiled stored procedure to become non-native compiled, and the reverse is equally not supported.

If we wish to make such a change, we are required to perform a DROP PROCEDURE and CREATE PROCEDURE. This should make sense, as we are moving the object between the In-Memory OLTP Engine and the traditional relational engine, and so we are recreating it in its entirety to achieve the desired change. This also means that any permissions assigned to such an object need to be re-assigned at (re)creation time.

During the recompile process, when an ALTER command is issued, the old version of the natively compiled stored procedure can still be executed. Upon compilation of the altered stored procedure, the old version will be destroyed and all subsequent calls of the stored procedure will use the new definition. This allows an ALTER command to be issued without causing long waiting periods, but may allow transactions to execute using potentially old code.

Feature improvements

While it is welcome that we can now alter existing objects without having to drop them first, many developers are more interested in finally being able to use data types and T-SQL syntax in the In-Memory OLTP engine that go beyond just the basics. In SQL Server 2016, we have been presented with a great deal of extra support, as is typical with version 2 of a product. In this section of the chapter, we will take a look at what areas of the database engine are now supported in the In-Memory OLTP engine.

Collations

The first major addition is the fact that both memory-optimized tables and natively compiled stored procedures support all code pages and collations that SQL Server supports. The previous limitation of only supporting a subset of collations otherwise available in SQL Server has been completely removed from the product (a newly supported collation has been used in the test table in the demo scripts in this chapter).

This improvement allows us to incorporate tables and stored procedures using the collations that we are used to or already using in our database designs.

Data types and types of data

SQL Server 2016 now supports the vast majority of data types in the In-Memory OLTP engine. In fact, the list of supported data types is now so long that it is easier to list the unsupported types instead. The data types that are currently unsupported are as follows:

  • datetimeoffset
  • geography
  • geometry
  • hierarchyid
  • rowversion
  • xml
  • sql_variant

This means that, with the exception of XML, large object (LOB) types are supported; this covers the max data types varchar(max), nvarchar(max), and varbinary(max). This is great news, as there were many cases where a disk-based table was barred from the In-Memory OLTP engine solely because it had a single LOB column alongside otherwise supported columns.

LOB data type support is not limited to tables and indexes; it is now also possible to use LOB data types in natively compiled stored procedures, allowing LOB parameters to be passed into and out of them.
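As a minimal sketch of this capability (the table and procedure names here are hypothetical and not part of the chapter's running example), a natively compiled procedure can accept a varchar(max) parameter and write it to a memory-optimized table:

USE InMemoryTest
GO
CREATE TABLE dbo.UserNotes
(
   UserId INT NOT NULL PRIMARY KEY NONCLUSTERED,
   Notes  VARCHAR(MAX) NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
CREATE PROCEDURE dbo.SaveUserNotes
    @UserId INT,
    @Notes  VARCHAR(MAX)   -- LOB parameter, now supported for natively compiled modules
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
(
   TRANSACTION ISOLATION LEVEL = SNAPSHOT,
   LANGUAGE = N'English'
)
   INSERT dbo.UserNotes (UserId, Notes) VALUES (@UserId, @Notes);
END;
GO
DECLARE @BigNote VARCHAR(MAX) = REPLICATE(CAST('x' AS VARCHAR(MAX)), 100000);
EXEC dbo.SaveUserNotes @UserId = 1, @Notes = @BigNote;
GO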

An exhaustive list of current In-Memory OLTP limitations can be found in Microsoft Books Online: https://msdn.microsoft.com/en-us/library/dn246937.aspx

Microsoft Books Online also has indications of which commands or DDL/data types will be made available to the In-Memory OLTP engine in the next version of SQL Server. One such example is the upcoming support of Computed Columns.

What's new with indexes?

Indexing memory-optimized tables was quite restrictive in the initial version of the In-Memory OLTP engine. So far in the chapter, we have seen that there have been a number of improvements (adding, removing, and rebuilding indexes), but that is not all!

It is now possible to define non-unique indexes with NULLable key columns for memory-optimized tables. NULLability is something that is widely used and the move toward parity between memory-optimized and disk-based indexes is continued here.

Similarly, it is possible to declare UNIQUE indexes (other than the primary key) for memory-optimized tables.
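A quick sketch of both additions against our demo table follows; the column, index, and constraint names are illustrative, and everything is removed again afterwards so the rest of the chapter's examples still line up:

USE InMemoryTest
GO
-- a NULLable column used as the key of a non-unique range index
ALTER TABLE dbo.InMemoryTable
ADD DepartmentId INT NULL,
   INDEX IX_DepartmentId NONCLUSTERED (DepartmentId);
GO
-- a UNIQUE constraint (and index) other than the primary key
ALTER TABLE dbo.InMemoryTable ADD CONSTRAINT UQ_UserName UNIQUE NONCLUSTERED (UserName);
GO
-- clean up
ALTER TABLE dbo.InMemoryTable DROP CONSTRAINT UQ_UserName;
ALTER TABLE dbo.InMemoryTable DROP INDEX IX_DepartmentId;
ALTER TABLE dbo.InMemoryTable DROP COLUMN DepartmentId;
GO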

We will take a deeper look at indexing later in this chapter, in the section titled Down the index rabbit-hole.

Unconstrained integrity

Referential integrity and data integrity are important foundational rules inside relational databases. In SQL Server 2014, the In-Memory OLTP Engine supported only an extremely limited set of constraints, which meant that solutions largely had to forego enforcing referential and data integrity on memory-optimized tables.

With SQL Server 2016, it is now possible to implement the following referential and data integrity rules for memory-optimized tables:

  • FOREIGN KEY constraints (between memory-optimized tables).
  • CHECK constraints.
  • DEFAULT constraints.
  • UNIQUE constraints.
  • AFTER triggers for INSERT, UPDATE, and DELETE operations. Triggers for memory-optimized tables must be created using WITH NATIVE_COMPILATION so that the trigger is natively compiled at creation time.

This will now allow developers to create tables with some semblance of sane referential and data integrity rules (as they are used to with disk-based tables).
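The following sketch pulls several of these together: a hypothetical child table with a FOREIGN KEY back to our demo table, a DEFAULT, a CHECK, and a natively compiled AFTER INSERT trigger writing to a small audit table (all object names here are illustrative):

USE InMemoryTest
GO
CREATE TABLE dbo.UserLoginAudit
(
   AuditId   INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
   UserId    INT NOT NULL,
   LoginTime DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
CREATE TABLE dbo.UserLogins
(
   LoginId        INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
   UserId         INT NOT NULL,
   LoginTime      DATETIME2 NOT NULL CONSTRAINT DF_UserLogins_LoginTime DEFAULT (SYSUTCDATETIME()),
   FailedAttempts INT NOT NULL CONSTRAINT CK_UserLogins_FailedAttempts CHECK (FailedAttempts >= 0),
   CONSTRAINT FK_UserLogins_User FOREIGN KEY (UserId) REFERENCES dbo.InMemoryTable (UserId)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
CREATE TRIGGER dbo.TR_UserLogins_Insert
ON dbo.UserLogins
WITH NATIVE_COMPILATION, SCHEMABINDING
AFTER INSERT
AS
BEGIN ATOMIC WITH
(
   TRANSACTION ISOLATION LEVEL = SNAPSHOT,
   LANGUAGE = N'English'
)
   -- copy the newly inserted rows into the audit table
   INSERT dbo.UserLoginAudit (UserId, LoginTime)
   SELECT UserId, LoginTime FROM inserted;
END;
GO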

Not all operators are created equal

While the number of commands that are supported for natively compiled T-SQL in SQL Server 2016 has expanded, unfortunately a wide range of the most powerful T-SQL commands are still not available for In-Memory OLTP solutions.

The following notable operators are unsupported in SQL Server 2016:

  • LIKE
  • CASE
  • INTERSECT
  • EXCEPT
  • APPLY

Attempting to use these operators inside a natively compiled stored procedure or scalar function will result in an error message explaining that the syntax is not supported:

The operator 'LIKE' is not supported with natively compiled modules. 

We can expect these and other currently unsupported operators to be added to future releases of SQL Server.

A full list of unsupported operators can be found on the Microsoft Books Online page: https://msdn.microsoft.com/en-us/library/dn246937.aspx#Anchor_4

It is important to understand that these T-SQL limitations are for natively compiled T-SQL (stored procedures and scalar functions). You are still able to write interpreted T-SQL (non-natively compiled T-SQL) using these commands. For example, it is perfectly fine to query a memory-optimized table and issue a LIKE operator to search in a character column.

Size is everything!

When the In-Memory OLTP engine was initially introduced in SQL Server 2014, Microsoft announced that the maximum size of any memory-optimized table was 256 GB. This limit came out of internal testing at Microsoft and was chosen to ensure stability and reliability of the feature in production environments. The main constraint was the design of the storage subsystem, particularly the checkpoint file pairs: there was a hard limit of 8,192 checkpoint file pairs, each capable of storing 128 MB of data. The 256 GB figure was not a hard limit but rather a suggested maximum, because the memory requirements of a memory-optimized table are dynamic. This dynamism stems from the memory needed to merge data changes into memory-optimized tables (inserts, deletes, updates) and the follow-on overhead of versioned rows and rows waiting for garbage collection.

When SQL Server 2016 was released, Microsoft quoted a maximum size of 2 TB for a memory-optimized table. Again, this figure came from testing and reliability measurements inside the development team at Microsoft. The storage subsystem for the checkpoint file pairs has been re-architected, removing the 8,192-pair limit and effectively leaving a memory-optimized table with no upper size limit: as long as you have memory available, the table can grow. This does not mean that any and all tables should be memory-optimized, but it does mean that we can now consider using the In-Memory OLTP Engine to store tables of any size we can design.

Improvements in the In-Memory OLTP Engine

Further to the above-mentioned size limitation removal, there are a number of improvements under the hood of the In-Memory OLTP Engine that may not be immediately apparent, but can drastically improve performance and scalability.

First up is a range of improvements to the storage subsystem: not only the previously mentioned removal of the checkpoint file pair limit, but also the introduction of multi-threaded checkpointing. In SQL Server 2014, the offline checkpoint process was a single-threaded system. This thread would scan the transaction log for changes to memory-optimized tables and write those changes to the checkpoint file pairs. This meant that a potentially multi-core system would have a "busy" core for the checkpointing process. With SQL Server 2016, this checkpointing system has now been redesigned to run on multiple threads, thereby increasing throughput to the checkpoint file pairs and thus increasing overall performance and scalability of the In-Memory OLTP Engine.

Similar improvements have been made to the recovery process. When the server is performing crash recovery, or bringing a database online after a server was taken offline, the log apply operations are now also processed using multiple threads.

The theme of multi-threading or parallel processing has also been continued in the query processing portion of the In-Memory OLTP Engine. Here we can expect performance improvements for a range of queries, both interpreted and natively compiled. The query processing engine is now able to process the MERGE statement using multiple threads (as long as your server/database MAXDOP settings allow for this). This allows the query optimizer to use parallelism where it deems parallel processing to be more efficient than serial processing.

Equally, in SQL Server 2014, hash indexes could only be scanned using a single-thread/serial scan rather than a parallel scan. SQL Server 2016 has now implemented a parallel scan method to provide a significant performance gain when scanning a hash index. This helps to mitigate the inherent performance pain that is experienced when performing an index scan over a hash index.

Down the index rabbit-hole

So far in this chapter, we have seen the additions of features and functionalities that were available in disk-based objects, into the In-Memory OLTP Engine. The improvements in indexing and the ability to alter your indexing strategy with memory-optimized tables without dropping the table are particularly attractive. With that in mind, we will spend some time investigating how the indexes are treated inside the storage engine. This will include a journey through the system catalogs and Dynamic Management Views (DMVs), to allow us to see how index information can be queried.

We will now take this opportunity to explore what effect altering the bucket count can have on the data in our demonstration table.

Let us begin by rolling back two of our changes from earlier in the chapter: dropping the hash index HSH_ANewColumn, dropping the two columns ANewColumn and AnotherColumn and, finally, recreating the hash index HSH_LoginTime with an unrealistically low and sub-optimal bucket count of 2:

USE InMemoryTest 
GO 
ALTER TABLE dbo.InMemoryTable DROP INDEX HSH_ANewColumn 
GO 
ALTER TABLE dbo.InMemoryTable DROP COLUMN ANewColumn 
GO 
ALTER TABLE dbo.InMemoryTable DROP COLUMN AnotherColumn 
GO 
ALTER TABLE dbo.InMemoryTable ADD INDEX HSH_LoginTime NONCLUSTERED HASH (LoginTime) WITH (BUCKET_COUNT=2); 
GO 

We will begin by taking a look into the system catalog views that we have used in past versions of SQL Server to inspect the table and indexes we have created so far:

USE InMemoryTest 
GO  
SELECT OBJECT_NAME(i.object_id) AS [table_name], 
    COALESCE(i.name,'--HEAP--') AS [index_name], 
    i.index_id, 
    i.type, 
    i.type_desc 
FROM sys.indexes AS i 
WHERE i.object_id = OBJECT_ID('InMemoryTable'); 
 
table_name    index_name     index_id type type_desc
------------- -------------- -------- ---- ----------
InMemoryTable --HEAP--       0        0    HEAP
InMemoryTable HSH_UserName   2        7    NONCLUSTERED HASH
InMemoryTable PK_UserId      3        2    NONCLUSTERED
InMemoryTable HSH_LoginTime  4        7    NONCLUSTERED HASH 

The results of this first query against sys.indexes shows us that we have a HEAP (the data pages of the memory-optimized table) and three additional indexes that we have created so far. Particularly noteworthy here are the two non-clustered hash indexes that we created: HSH_UserName and HSH_LoginTime. Both appear as index type 7 and index description NONCLUSTERED HASH. This should come as no great surprise, but shows us that the old system views have been extended to include information regarding memory-optimized tables and indexes. The listing above may be executed without the WHERE clause to see that the details for both memory-optimized and disk-based tables and indexes can be queried simultaneously.

We already know that hash indexes are exclusive to memory-optimized tables. However, if we go through the demo code so far, we also know that the primary key of this table is also memory-optimized. The query referencing sys.indexes, however, only shows us that the primary key is a non-clustered index. So, we have no way of knowing if this non-clustered index is a memory-optimized or a disk-based index.

The information about whether a table is a memory-optimized or disk-based table is stored in the newly extended sys.tables catalog view:

SELECT COALESCE(i.name,'--HEAP--') AS [index_name], 
    i.index_id, 
    i.type, 
   t.is_memory_optimized, 
   t.durability, 
   t.durability_desc 
FROM sys.tables t 
    INNER JOIN sys.indexes AS i 
        ON i.object_id = t.object_id 
WHERE t.name = 'InMemoryTable'; 
 
index_name     index_id type is_memory_optimized durability durability_desc
----------     -------- ---- ------------------- ---------- ---------------
--HEAP--       0        0    1                   0          SCHEMA_AND_DATA
HSH_UserName   2        7    1                   0          SCHEMA_AND_DATA
PK_UserId      3        2    1                   0          SCHEMA_AND_DATA
HSH_LoginTime  4        7    1                   0          SCHEMA_AND_DATA

As the results of the query show, the table and all indexes are memory-optimized (is_memory_optimized = 1). We are also able to determine which durability option has been chosen for the table.

With these two queries, we are able to take our first look at the index information for memory-optimized tables. However, we are not able to see any particular information regarding our hash index (bucket counts or chain lengths). To access that information, we leave the general system catalog views sys.indexes and sys.tables behind and venture forwards into more feature specific catalog views.

Our first port of call is the aptly named sys.hash_indexes, which displays information on all hash indexes in a database. The catalog view sys.hash_indexes has the same columns as sys.indexes, with the addition of the column bucket_count. This column displays the bucket count assigned to the index at creation time (or at the last rebuild with the BUCKET_COUNT option supplied):

USE InMemoryTest 
GO  
SELECT hi.name AS [index_name], 
   hi.index_id, 
   hi.type, 
   hi.bucket_count 
FROM sys.hash_indexes AS hi; 
index_name    index_id    type bucket_count
------------- ----------- ---- ------------
HSH_UserName  2           7    16384
HSH_LoginTime 4           7    2 

This catalog view only provides us with information about the index structure as defined; it does not provide any details about the chain length inside each bucket (the number of rows assigned to each bucket via the hashing algorithm). Note also that, although HSH_UserName was created with BUCKET_COUNT=10000, the engine rounds the bucket count up to the next power of two, which is why 16,384 buckets are reported.

To access this much more important and relevant information, we leave the general catalog views behind and turn to a dynamic management view created specifically for memory-optimized indexes: sys.dm_db_xtp_hash_index_stats:

SELECT COALESCE(i.name, '--HEAP--') AS [index_name], 
    i.index_id, 
    i.type, 
    ddxhis.total_bucket_count AS [total_buckets], 
    ddxhis.empty_bucket_count AS [empty_buckets], 
    ddxhis.avg_chain_length, 
    ddxhis.max_chain_length 
FROM sys.indexes AS i 
    LEFT JOIN sys.dm_db_xtp_hash_index_stats AS ddxhis 
        ON ddxhis.index_id = i.index_id 
           AND ddxhis.object_id = i.object_id 
WHERE i.object_id = OBJECT_ID('InMemoryTable'); 
index_name    index_id type total_buckets empty_buckets avg_chain_length max_chain_length
------------- -------- ---- ------------- ------------- ---------------- ----------------
--HEAP--      0        0    NULL          NULL          NULL             NULL
HSH_UserName  2        7    16384         16383         1                1
PK_UserId     3        2    NULL          NULL          NULL             NULL
HSH_LoginTime 4        7    2             1             1                1

By joining the DMV sys.dm_db_xtp_hash_index_stats to sys.indexes (or even directly to sys.hash_indexes), we are finally able to see our bucket counts and usage statistics: how many buckets are empty, and the average and maximum chain length for each hash index.

We will want to monitor the statistics of our hash indexes to see if the bucket count is optimal or not. As each bucket takes up 8 bytes of memory, the higher the bucket count, the more memory we will use for the index. A higher bucket count also results in slower traversal of the index when performing index scans. So, if we note a high number of empty buckets, we may consider reducing the bucket count to improve potential scan speeds. In general, it is best to avoid performing index scans on a hash index, as the scan must first scan the hash buckets and then the chains of rows for each bucket. This has considerable overhead versus scanning a non-clustered index, which doesn't have the added complexity of the bucket structure to traverse.

Equally, if we see a high average or maximum chain length, this could indicate that there are not enough buckets available for the data coming into the index. Higher chain lengths can also impact performance when performing seeks or inserts.

The general idea of a hash index is to have just enough buckets to cover all possible unique values going into the index (or as near to this as possible), so that when a query seeks for a single hash value, the storage engine can retrieve the one (or very few) row(s) that belong to that hash value.

The recommendation from Microsoft is to aim for an empty bucket percentage of 33% or above. If the bucket count matches the number of unique values for the index, the hashing algorithm's distribution will cause around 33% of buckets to remain empty.

The recommendation from Microsoft is to aim for a chain length of less than 10, with the ideal value being 1 (one row per hash value and therefore bucket).

If you have wildly non-unique values with many duplicates, a non-clustered (range) index is likely to be a better choice than a hash index.
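Putting the empty-bucket and chain-length recommendations together, a simple monitoring query against sys.dm_db_xtp_hash_index_stats can flag hash indexes that deserve a closer look; the thresholds below simply mirror the guidance above and can be adjusted to taste:

USE InMemoryTest
GO
SELECT OBJECT_NAME(ddxhis.object_id) AS [table_name],
    i.name AS [index_name],
    ddxhis.total_bucket_count,
    ddxhis.empty_bucket_count * 100. / ddxhis.total_bucket_count AS [empty_bucket_pct],
    ddxhis.avg_chain_length,
    ddxhis.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS ddxhis
    JOIN sys.indexes AS i
        ON i.object_id = ddxhis.object_id
           AND i.index_id = ddxhis.index_id
WHERE ddxhis.empty_bucket_count * 100. / ddxhis.total_bucket_count < 33 -- too few empty buckets
   OR ddxhis.avg_chain_length > 10;                                     -- chains too long
GO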

We can see how the empty buckets and chain lengths are affected by inserting a handful of rows into the test table and re-running the previous query to show how the chain length and bucket "fullness" has changed:

USE InMemoryTest 
GO  
INSERT INTO dbo.InMemoryTable 
        ( UserId, UserName , LoginTime, LoginCount ) 
VALUES 
         (2, 'Donald Duck'    , '2016-01-02', 1), 
         (3, 'Steve Jobs'     , '2016-01-03', 1), 
         (4, 'Steve Ballmer'  , '2016-01-04', 1), 
         (5, 'Bill Gates'     , '2016-01-05', 1), 
         (6, 'Ted Codd'       , '2016-01-06', 1), 
         (7, 'Brian Kernighan', '2016-01-07', 1), 
         (8, 'Dennis Ritchie' , '2016-01-08', 1); 
GO  
SELECT COALESCE(i.name, '--HEAP--') AS [index_name], 
    i.index_id, 
    i.type, 
    ddxhis.total_bucket_count AS [total_buckets], 
    ddxhis.empty_bucket_count AS [empty_buckets], 
    ddxhis.avg_chain_length, 
    ddxhis.max_chain_length 
FROM sys.indexes AS i 
    LEFT JOIN sys.dm_db_xtp_hash_index_stats AS ddxhis 
        ON ddxhis.index_id = i.index_id 
           AND ddxhis.object_id = i.object_id 
WHERE i.object_id = OBJECT_ID('InMemoryTable'); 
 


Here we can see that the two hash indexes now contain slightly different information. We now have a total of eight rows in the table (and in the indexes). HSH_UserName still has 16,384 buckets, with the hash function applied to the UserName column. As each of the seven newly inserted rows has a unique value for UserName, each row lands in a previously empty bucket, leaving 16,376 buckets empty and the average and maximum chain lengths at 1. The data inserted into LoginTime was also unique for each row. However, only two buckets are assigned to HSH_LoginTime, so the eight rows hash to one of two possible values and must share the two available buckets, and the average and maximum chain lengths are no longer 1.

This example is greatly simplified, but it allows us to recognize that the implementation of the hash index on LoginTime requires attention: either the bucket count needs raising or a hash index is possibly the wrong choice for this column. Equally, the hash index on UserName provides ideal chain lengths but has an excessive number of empty buckets; the index is using more memory than necessary and may need its bucket count reduced to release memory for other memory-optimized objects. Now that we know that bucket counts and chain lengths can affect the amount of memory required to store our memory-optimized tables and indexes, we should also take a quick look at how the two hash indexes in the example differ in their memory consumption:

USE InMemoryTest 
GO  
SELECT COALESCE(i.name, '--HEAP--') AS [index_name], 
    i.index_id, 
    i.type, 
    c.allocated_bytes, 
    c.used_bytes 
FROM sys.dm_db_xtp_memory_consumers c 
    JOIN sys.memory_optimized_tables_internal_attributes a 
        ON a.object_id = c.object_id 
           AND a.xtp_object_id = c.xtp_object_id 
    LEFT JOIN sys.indexes i 
        ON c.object_id = i.object_id 
           AND c.index_id = i.index_id 
WHERE c.object_id = OBJECT_ID('InMemoryTable') 
      AND a.type = 1 
ORDER BY i.index_id; 
 
index_name    index_id type allocated_bytes used_bytes
------------- -------- ---- --------------- ----------
--HEAP--      NULL     NULL 131072          696
HSH_UserName  2        7    131072          131072
PK_UserId     3        2    196608          1368
HSH_LoginTime 4        7    16              16
The results show an interesting fact. The HSH_LoginTime index was created with two buckets and has 16 bytes allocated and used. This makes sense when we remember that each bucket takes up 8 bytes of memory. However, HSH_UserName takes up 131,072 bytes (128 KB) of memory, although we requested 10,000 buckets. 10,000 buckets would only require around 78 KB (10,000 * 8 bytes = 80,000 bytes); however, as we saw in sys.hash_indexes, the bucket count is rounded up to the next power of two, giving 16,384 buckets, and 16,384 * 8 bytes is exactly 128 KB. We can test this by rebuilding the index with a bucket count that rounds up to 8,192 buckets, equating to 64 KB (the next step down the power-of-two scale):

USE InMemoryTest 
GO  
ALTER TABLE dbo.InMemoryTable ALTER INDEX HSH_UserName REBUILD WITH (BUCKET_COUNT=8000); 
GO 
SELECT COALESCE(i.name, '--HEAP--') AS [index_name], 
    i.index_id, 
    i.type, 
    c.allocated_bytes, 
    c.used_bytes 
FROM sys.dm_db_xtp_memory_consumers c 
    JOIN sys.memory_optimized_tables_internal_attributes a 
        ON a.object_id = c.object_id 
           AND a.xtp_object_id = c.xtp_object_id 
    LEFT JOIN sys.indexes i 
        ON c.object_id = i.object_id 
           AND c.index_id = i.index_id 
WHERE c.object_id = OBJECT_ID('InMemoryTable') 
      AND a.type = 1 
ORDER BY i.index_id; 
 
index_name    index_id type allocated_bytes used_bytes
------------- -------- ---- --------------- ----------
--HEAP--      NULL     NULL 131072          760
HSH_UserName  2        7    65536           65536
PK_UserId     3        2    196608          1368
HSH_LoginTime 4        7    16              16 

We now see that the HSH_UserName index has 65,536 bytes (64 KB) allocated, although 8,000 buckets would equate to only ~62.5 KB; the requested bucket count is rounded up to 8,192, and 8,192 * 8 bytes is exactly 64 KB. This gives us further insight into memory use when indexing memory-optimized tables: there is a certain amount of overhead for the storage of a hash index relative to the number of buckets we request.

Large object support

We will now take a look at exactly what LOB data types are supported in SQL Server 2016 and what aspects of these data types need to be considered when using them with memory-optimized objects.

Let's begin by adding a LOB column to our test table:

USE InMemoryTest 
GO  
ALTER TABLE dbo.InMemoryTable ADD NewColumnMax VARCHAR(MAX) NULL 
GO 

This follows the same scheme as the previous code examples; adding a LOB column is no different with a memory-optimized table than a disk-based table. At least on the surface!

Let's take another look into our index metadata to see how exactly LOB columns are handled by the storage engine:

USE InMemoryTest 
GO  
SELECT COALESCE(i.name, '--HEAP--') AS [index_name], 
    c.allocated_bytes AS allocated, 
    c.used_bytes AS used, 
    c.memory_consumer_desc AS memory_consumer, 
    a.type_desc 
FROM sys.dm_db_xtp_memory_consumers c 
    JOIN sys.memory_optimized_tables_internal_attributes a 
        ON a.object_id = c.object_id 
           AND a.xtp_object_id = c.xtp_object_id 
    LEFT JOIN sys.indexes i 
        ON c.object_id = i.object_id 
           AND c.index_id = i.index_id 
WHERE c.object_id = OBJECT_ID('InMemoryTable') 
      AND i.index_id IS NULL; 
GO 
index_name allocated used memory_consumer    type_desc
---------- --------- ---- ------------------ ----------------------------
--HEAP--   131072    760  Table Heap         USER_TABLE
--HEAP--   0         0    LOB Page Allocator INTERNAL OFF-ROW DATA TABLE
--HEAP--   0         0    Table heap         INTERNAL OFF-ROW DATA TABLE
There are a number of things to note here:

  • We are filtering on the table/heap only (AND i.index_id IS NULL). This will allow us to concentrate on the base table only.
  • Two additional columns have been added, showing what type of memory consumer each row represents.
  • We now see two additional rows displayed.

The preceding listing returns two additional rows, one marked LOB Page Allocator and one marked Table heap, and both with an internal table attribute of Internal off row data table. These are references to how the LOB data is stored for memory-optimized tables.

LOB data is not stored along with the other data pages of a memory-optimized table. Rather, the LOB data is stored in a separate data structure that is optimized for LOB-style data. The LOB Page Allocator stores the actual LOB data, while the Table heap stores the references to the LOB data and references back to the base table rows. This arrangement has been designed so that the row-versioning mechanism, the foundation of the multi-version concurrency control inside the In-Memory OLTP engine (described in Chapter 11, Introducing SQL Server In-Memory OLTP), doesn't need to keep multiple versions of the LOB data in the same way it must for the regular columns. By decoupling the two storage classes, SQL Server can store these two quite different kinds of data much more efficiently. In particular, LOB data is expected to be modified less often than non-LOB columns, so processing the LOB data separately greatly reduces the memory resource usage and the overhead of storing LOB data in a memory-optimized table, while still allowing the data to be processed through a single interface.

At the moment, the LOB column is empty (allocated = 0 and used = 0); if we now run a simple update to copy the UserName column data into the LOB column, we can run the same query and investigate how the data is stored in the LOB data structures:

USE InMemoryTest 
GO  
UPDATE dbo.InMemoryTable 
SET NewColumnMax = UserName 
GO 
SELECT COALESCE(i.name, '--HEAP--') AS [index_name], 
    c.allocated_bytes AS allocated, 
    c.used_bytes AS used, 
    c.memory_consumer_desc AS memory_consumer, 
    a.type_desc 
FROM sys.dm_db_xtp_memory_consumers c 
    JOIN sys.memory_optimized_tables_internal_attributes a 
        ON a.object_id = c.object_id 
           AND a.xtp_object_id = c.xtp_object_id 
    LEFT JOIN sys.indexes i 
        ON c.object_id = i.object_id 
           AND c.index_id = i.index_id 
WHERE c.object_id = OBJECT_ID('InMemoryTable') 
      AND i.index_id IS NULL; 
GO 
index_name allocated used memory_consumer    type_desc
---------- --------- ---- ------------------ ----------------------------
--HEAP--   131072    1520 Table Heap         USER_TABLE
--HEAP--   131072    376  LOB Page Allocator INTERNAL OFF-ROW DATA TABLE
--HEAP--   65536     512  Table heap         INTERNAL OFF-ROW DATA TABLE
We are observing a similar behavior to earlier in the chapter, where allocation occurs in power-of-two steps. The LOB Page Allocator has allocated 131,072 bytes (128 KB) and the supporting Table heap has allocated 65,536 bytes (64 KB), although both are using much less space. This is a pure efficiency step, avoiding the need to perform additional allocations too often. However, it hints at a behavior that could cause memory resource contention if large amounts of LOB data were to be processed in this way.

Tip

If you run the index metadata queries shortly after modifying the table structure, you may notice that the memory allocations of the table are larger than shown in the example results here. This has to do with how ALTER statements are processed by the In-Memory OLTP Engine. Earlier in the chapter, this mechanism was described: an ALTER makes a copy of the original table, incorporating the desired changes into the new copy. This leads to a doubling of the memory allocation until the background garbage collection removes the old copy of the data from memory.

When SQL Server processes LOB data in the In-Memory OLTP Engine, the decision whether to push the data from in-row to off-row storage is made according to the table schema. This is a different mechanism from the disk-based storage engine, which bases the decision on the data being processed at execution time. Disk-based storage only places data off-row when it won't fit on a data page, while memory-optimized tables place a column off-row when the table schema declares it as a max data type, or when the row could exceed 8,060 bytes if every variable-length column were completely full. When LOB data is stored off-row for a memory-optimized table, there is a significant overhead associated with that storage: a reference must be stored for the in-row data (the base memory-optimized table), for the off-row data (the LOB data itself in the LOB Page Allocator), and for the leaf level of the range index (the intermediary heap referencing between the on-row and off-row data).

In addition to this, the actual data must be stored in the LOB Page Allocator. This overhead can exceed 50 bytes per row and is created and maintained for each and every LOB column that a memory-optimized table has. The more LOB columns, the more overhead and referencing is required. As you can see, writing an additional 50+ bytes for each row and LOB column can quickly cause significant resource and performance issues.
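To illustrate the schema-based decision with non-max columns, here is a small sketch of a hypothetical table whose declared row size exceeds 8,060 bytes; once it is created, the memory consumer query used above will report INTERNAL OFF-ROW DATA TABLE entries for it even though no max data types are involved:

USE InMemoryTest
GO
CREATE TABLE dbo.WideRowTable
(
   Id   INT NOT NULL PRIMARY KEY NONCLUSTERED,
   Col1 VARCHAR(4000) NULL,
   Col2 VARCHAR(4000) NULL,
   Col3 VARCHAR(4000) NULL  -- the potential row size now exceeds 8,060 bytes
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO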

Storage differences of on-row and off-row data

Let's now take a look at the difference in storage and processing between two tables with variable length character columns. One table has varchar(5) columns, which will be small enough to fit in the in-row storage. The other table will have a series of varchar(max) columns, which will automatically be stored in the off-row storage. These tables will be created and filled with 100,000 rows each to demonstrate both the storage and also performance differences between the two storage types:

USE InMemoryTest 
GO 
DROP TABLE IF EXISTS dbo.InMemoryTableMax 
DROP TABLE IF EXISTS dbo.InMemoryTableNotMax 
GO 
 
CREATE TABLE dbo.InMemoryTableMax 
( 
   UserId INT NOT NULL IDENTITY (1,1), 
   MaxCol1 VARCHAR(max) COLLATE Latin1_General_CI_AI NOT NULL , 
   MaxCol2 VARCHAR(max) COLLATE Latin1_General_CI_AI NOT NULL , 
   MaxCol3 VARCHAR(max) COLLATE Latin1_General_CI_AI NOT NULL , 
   MaxCol4 VARCHAR(max) COLLATE Latin1_General_CI_AI NOT NULL , 
   MaxCol5 VARCHAR(max) COLLATE Latin1_General_CI_AI NOT NULL , 
   CONSTRAINT PK_InMemoryTableMax  PRIMARY KEY NONCLUSTERED (UserId), 
 
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY=SCHEMA_AND_DATA) 
GO 
 
CREATE TABLE dbo.InMemoryTableNotMax 
( 
   UserId INT NOT NULL IDENTITY (1,1), 
   Col1 VARCHAR(5) COLLATE Latin1_General_CI_AI NOT NULL , 
   Col2 VARCHAR(5) COLLATE Latin1_General_CI_AI NOT NULL , 
   Col3 VARCHAR(5) COLLATE Latin1_General_CI_AI NOT NULL , 
   Col4 VARCHAR(5) COLLATE Latin1_General_CI_AI NOT NULL , 
   Col5 VARCHAR(5) COLLATE Latin1_General_CI_AI NOT NULL , 
   CONSTRAINT PK_InMemoryTableNotMax  PRIMARY KEY NONCLUSTERED (UserId), 
 
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY=SCHEMA_AND_DATA) 
GO 

The only difference between these two tables is the column data types: one uses varchar(max), while the other uses varchar(5). We are leaving the tables with only a primary key and no other indexes, because we only want to investigate the differences between on-row and off-row storage.

If we now run our memory consumers query from earlier in this section, we can investigate how the LOB Page Allocator and matching Table heap objects are created for each table:

SELECT OBJECT_NAME(c.object_id) AS [table_name], 
    c.allocated_bytes AS allocated, 
    c.used_bytes AS used, 
    c.memory_consumer_desc AS memory_consumer, 
    a.type_desc 
FROM sys.dm_db_xtp_memory_consumers c 
    JOIN sys.memory_optimized_tables_internal_attributes a 
        ON a.object_id = c.object_id 
           AND a.xtp_object_id = c.xtp_object_id 
    LEFT JOIN sys.indexes i 
        ON c.object_id = i.object_id 
           AND c.index_id = i.index_id 
WHERE 
( 
    c.object_id = OBJECT_ID('InMemoryTableNotMax') 
    OR c.object_id = OBJECT_ID('InMemoryTableMax') 
) 
AND i.index_id IS NULL  


The results of the memory consumer query show that we have a single Table heap for the table InMemoryTableNotMax (the table with the varchar(5) columns) and we have several internal off-row data tables for the InMemoryTableMax table. In fact, we have a LOB Page Allocator and a matching Table heap for each varchar(max) column in the table.

We then fill each table with 100,000 rows of data while running a basic timing comparison to see what performance overhead the LOB Page Allocator and Table heap maintenance causes:

SET NOCOUNT ON  
GO 
 
SET STATISTICS TIME ON 
GO 
 
INSERT INTO dbo.InMemoryTableMax 
        ( MaxCol1 , 
          MaxCol2 , 
          MaxCol3 , 
          MaxCol4 , 
          MaxCol5 
        ) 
SELECT TOP 100000 
    'Col1', 
    'Col2', 
    'Col3', 
    'Col4', 
    'Col5' 
FROM sys.columns a 
    CROSS JOIN sys.columns; 
GO 
 
INSERT INTO dbo.InMemoryTableNotMax 
        ( Col1 , 
          Col2 , 
          Col3 , 
          Col4 , 
          Col5 
        ) 
SELECT TOP 100000 
    'Col1', 
    'Col2', 
    'Col3', 
    'Col4', 
    'Col5' 
FROM sys.columns a 
    CROSS JOIN sys.columns 
GO 
 
SET STATISTICS TIME OFF 
GO  
SQL Server Execution Times: 
   CPU time = 1797 ms,  elapsed time = 1821 ms. 
 
SQL Server Execution Times: 
   CPU time = 281 ms,  elapsed time = 382 ms. 

The results at the end of this listing show that inserting 100,000 rows into the table with varchar(max) columns took roughly five times longer (in elapsed time) than inserting the same rows into the table with varchar(5) columns. This timing difference is down to the overhead of inserting the data into the off-row storage (the LOB Page Allocator and the matching Table heap).

If we also take another look into the memory consumption statistics, we can see that there is also a significant difference in memory consumption between the two tables:

SELECT OBJECT_NAME(c.object_id) AS [table_name], 
    SUM(c.allocated_bytes) / 1024. AS allocated, 
    SUM(c.used_bytes) / 1024. AS used 
FROM sys.dm_db_xtp_memory_consumers c 
    JOIN sys.memory_optimized_tables_internal_attributes a 
        ON a.object_id = c.object_id 
           AND a.xtp_object_id = c.xtp_object_id 
    LEFT JOIN sys.indexes i 
        ON c.object_id = i.object_id 
           AND c.index_id = i.index_id 
WHERE 
( 
    c.object_id = OBJECT_ID('InMemoryTableNotMax') 
    OR c.object_id = OBJECT_ID('InMemoryTableMax') 
) 
AND i.index_id IS NULL 
GROUP BY c.object_id; 
table_name          allocated    used 
------------------- ------------ ------------ 
InMemoryTableMax    59392.000000 58593.750000 
InMemoryTableNotMax 7104.000000  7031.250000 

We can see that the use of LOB data types with memory-optimized tables should be considered very carefully. If LOB data types are required, the performance and resource consumption overhead should be taken into account, especially where high performance is the driving factor for adopting a memory-optimized solution.

Cross-feature support

We have seen that the In-Memory OLTP Engine has steadily added support for features outside the engine itself, allowing interoperability between disk-based and memory-optimized objects with increasing parity in each new release. However, developers also need to be aware of how In-Memory OLTP support is expanding into the more administration-focused areas of SQL Server.

Security

The first important feature we will discuss under the heading of cross-feature support is the ability to use Transparent Data Encryption (TDE) on memory-optimized objects. TDE, as the name suggests, allows a database administrator to apply an encryption mechanism to an entire database. This encryption protects all the data stored inside the database and is designed so that, should the database fall into the wrong hands (the data and log files are copied or a backup is misplaced), the data inside those files is completely encrypted and useless without the encryption key(s) and certificate(s), which are stored separately from the database files themselves. The scope of this chapter does not allow for a full excursion into the inner workings of TDE. However, it is enough to know that should you wish to use this simple but effective encryption solution, you are now able to use memory-optimized objects in such a database without any restrictions.
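A minimal sketch of enabling TDE for the demo database follows; the certificate name and password are placeholders, and the master key and certificate steps are only required once per instance:

USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<UseAStrongPasswordHere!1>';
GO
CREATE CERTIFICATE TDE_Cert WITH SUBJECT = 'TDE certificate for InMemoryTest';
GO
USE InMemoryTest;
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TDE_Cert;
GO
ALTER DATABASE InMemoryTest SET ENCRYPTION ON;
GO
-- the memory-optimized filegroup's checkpoint files and the transaction log are now
-- encrypted at rest; the memory-optimized tables themselves require no changes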

Developers wishing to implement In-Memory OLTP alongside the newest security features introduced in SQL Server 2016 (Always Encrypted, Row-Level Security, and Dynamic Data Masking, all explained in detail in Chapter 8, Tightening the Security) can also rest assured that Microsoft was paying attention during the design of these features: memory-optimized tables can take advantage of these improved security features too.

Programmability

We are also able to take full advantage of the extremely useful Query Store feature, which was also introduced in SQL Server 2016 and has a chapter dedicated to it: Chapter 9, Query Store. There is no additional configuration required to include memory-optimized tables or natively compiled code; once Query Store is activated, the memory-optimized objects are collected in the Query Store analysis along with disk-based objects and queries.
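Enabling Query Store for the demo database is a single statement. One note: plan capture for natively compiled modules needs no extra steps, but per-query runtime statistics for natively compiled procedures are not gathered by default and can be switched on with sys.sp_xtp_control_query_exec_stats should they be required:

ALTER DATABASE InMemoryTest SET QUERY_STORE = ON;
GO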

The focus on feature inclusion continues with Temporal Tables; a chapter explaining what Temporal Tables are and how they can be implemented is also available (Chapter 7, Temporal Tables). As with the other cross-feature support items, Temporal Tables work transparently with memory-optimized tables as well as disk-based tables; there are no special requirements to accommodate memory-optimized tables with this feature.
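A brief sketch of a system-versioned memory-optimized table follows (the table names are illustrative); the current table lives in memory while the automatically managed history table is a regular disk-based table:

USE InMemoryTest
GO
CREATE TABLE dbo.UserSessions
(
   SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED,
   UserName  VARCHAR(20) NOT NULL,
   ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
   ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
   PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA,
     SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.UserSessions_History));
GO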

High availability

High availability support is of great importance for systems that are processing large volumes of data requiring the performance of the In-Memory OLTP Engine.

SQL Server 2016 fully supports the use of memory-optimized objects in conjunction with Always On Availability Groups. Any memory-optimized table will be replicated to the secondary server(s) in the same way that disk-based tables are. For SCHEMA_ONLY tables, the table schema will be replicated to the secondary server(s), but not the data (because the data is never logged for schema only tables). As such, if SCHEMA_ONLY tables are replicated, a mechanism to refill these tables must be included for a failover scenario. If the memory-optimized tables are defined as SCHEMA_AND_DATA, the data will be replicated to the secondary server(s) and will be available to any readable secondary connections that may be reading from the secondary server(s).

Failover Cluster Instances are also fully supported for high availability with the In-Memory OLTP Engine. The memory-optimized tables behave in exactly the same way as on a standalone instance, meaning that tables defined with the SCHEMA_AND_DATA durability are instantiated at server startup; that is, they are created and filled from the checkpoint file pairs when the instance starts. Depending on how large these tables are, this initialization can take some time. This is not unique to Failover Cluster Instances, but applies to any SQL Server instance that uses the In-Memory OLTP Engine.

It is now also possible to use replication in conjunction with memory-optimized tables. The initial implementation inside SQL Server 2016 allows for a transactional or snapshot replication publisher to publish tables to a subscriber and for the subscriber tables to be created as memory-optimized tables. This means that the memory-optimized tables are in fact available on the subscriber side of the replication topology. Using this topology removes the bottleneck of locking and blocking on the subscriber side and would allow the subscriber to easily keep up with the replication data flow.

Tools and wizards

As with many of the features in SQL Server, In-Memory OLTP also has some useful tools to assist in working with the technology.

The main desire for many developers is to take their existing database solutions and to convert them to memory-optimized solutions (ideally automatically). Microsoft has provided a few options to investigate a database and deliver feedback on the database design with suggestions where memory-optimized solutions would be possible and feasible.

We can begin with the simplest of these tools: the In-Memory OLTP Migration Checklist. This is a wizard provided inside SQL Server Management Studio (SSMS), which will perform a variety of checks against an entire database and provide a set of checklists/reports that inform exactly what objects inside a database are candidates for conversion into memory-optimized objects. An overview of the work required is also provided (for example, supported and unsupported data types). The checklist wizard can be found by right-clicking a database and choosing Tasks and then Generate In-Memory OLTP Migration Checklists:


Figure 12.1: Generating In-Memory OLTP Migration Checklists

This will then launch the wizard for the chosen database, which guides us through the process of evaluating a database for compatibility with the In-Memory OLTP Engine. After choosing the objects to be processed by the wizard and where to store the resulting report, the wizard can be started.

The wizard then runs through all the chosen objects in the database; depending on the number of objects, this can take some time:


Figure 12.2: In-Memory OLTP Migration Checklist progress

Upon completion, the wizard can be closed and the reports found in the location specified before running the analysis. The resulting report files are HTML files, one per database object that has been analyzed, showing what changes (if any) are required to make the object compatible with the In-Memory OLTP Engine:


Figure 12.3: Example Migration Checklist Report for a table

As we can see in Figure 12.3, the report files generated by the wizard provide an overview of what properties of an object are or are not supported by the In-Memory OLTP Engine. Where appropriate, a link is supplied that loads a Books Online support page, describing the issue found and possible solutions.

The next tool to assist in evaluating a database for a possible migration of objects into the In-Memory OLTP Engine is a standard report delivered with SSMS, the Transaction Performance Analysis Overview. This report requires no pre-configuration of a database, except for the database being hosted on a SQL Server 2016 instance. The report collects execution statistics for queries in the database  and shows which tables or stored procedures are possible candidates for a migration to the In-Memory OLTP Engine. This means that the report will only have meaningful data in it after the database has been in use for a few hours or days:


Figure 12.4: Transaction Performance Analysis Overview report

The report loads inside SSMS and offers the choice of table or stored procedure analysis. Both options deliver a similar report, displaying the objects that have been recently accessed. These objects are plotted on a graph showing the potential performance impact of a migration to the In-Memory OLTP Engine versus the estimated work required to make the change. The best candidates are objects plotted toward the top-right of the graph (high impact, minimal migration work).

These two analysis options allow us to get an idea of how much impact and how much work would be involved in migrating to the In-Memory OLTP Engine. The results should be considered a basic indication and by no means offer a guarantee of accuracy:


Figure 12.5: Transaction Performance Analysis Overview result
