Now that we have looked at the theory behind the storage of memory-optimized tables, we want to get to the real fun and create some of these objects.
Before we can start creating our memory-optimized objects, we need to create a database with a filegroup designed for memory-optimized objects. This can be achieved as follows:
CREATE DATABASE InMemoryTest
    ON PRIMARY
        (NAME = [InMemoryTest_disk],
         FILENAME = 'C:\temp\InMemoryTest_disk.mdf',
         SIZE = 100MB),
    FILEGROUP [InMemoryTest_inmem] CONTAINS MEMORY_OPTIMIZED_DATA
        (NAME = [InMemoryTest_inmem],
         FILENAME = 'C:\temp\InMemoryTest_inmem')
    LOG ON
        (NAME = [InMemoryTest_log],
         FILENAME = 'C:\temp\InMemoryTest_log.ldf',
         SIZE = 100MB)
    COLLATE Latin1_General_100_BIN2;
The first thing to note is the separate filegroup dedicated to memory-optimized objects, declared with the keywords CONTAINS MEMORY_OPTIMIZED_DATA. This filegroup is used to persist the memory-optimized objects between server restarts (where required). The filestream APIs are used for this; you can observe which objects are created by navigating to the folder location and opening the folder (with administrator permissions).
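We can also confirm the filegroup from within SQL Server itself. As a quick sanity check (a sketch, assuming the database created above), memory-optimized data filegroups are reported in the catalog views with the type FX:

```sql
USE InMemoryTest;
GO
-- Memory-optimized data filegroups carry the type 'FX'
SELECT name, type, type_desc
FROM sys.filegroups
WHERE type = 'FX';
```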
Also of note is that the database has been created with a Windows BIN2 collation. This is a limitation of the initial In-Memory OLTP implementation in SQL Server 2014 and restricts the support for comparisons, grouping, and sorting with memory-optimized objects.
It is equally possible to add such a filegroup to an existing database using the ALTER DATABASE command, specifying the filegroup and then the location of the filestream folder:
ALTER DATABASE AdventureWorks2014
    ADD FILEGROUP InMemTest CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE AdventureWorks2014
    ADD FILE (NAME = 'InMemTest', FILENAME = 'c:\temp\InMemTest')
    TO FILEGROUP InMemTest;
GO
As we can see, Microsoft has tried to keep the integration of the In-Memory OLTP engine as seamless as possible. A simple filegroup addition is all it takes to be able to start creating memory-optimized objects. We are one step closer to a faster database, let's keep going.
Now that we have a filegroup/database that is ready for high-performance workloads, we need to take a moment and consider what limitations there are with the In-Memory OLTP engine.
The following data types are supported:

- tinyint, smallint, int, and bigint
- float and real
- smallmoney and money
- char(n), varchar(n), nchar(n), nvarchar(n), and sysname
- binary(n) and varbinary(n)
- smalldatetime, datetime, date, time, and datetime2
This leaves out the LOB data types, that is, XML, the max types (for example, varchar(max)), and CLR types (remembering that this is valid for the initial implementation in SQL Server 2014). Rows longer than 8,060 bytes are also not supported. Exceeding this limit is generally not recommended anyway, as it causes issues even in regular tables; with memory-optimized tables, however, we cannot even create a table whose row length is more than 8,060 bytes.
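To illustrate the difference (table and column names here are invented for this sketch), a regular table with an oversized row definition is created with a warning, whereas the equivalent memory-optimized table is rejected outright at creation time:

```sql
USE InMemoryTest;
GO
-- Fails immediately: 5,000 + 4,000 bytes exceed the 8,060-byte row limit
CREATE TABLE TooWideTable
(
    Id INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Col1 CHAR(5000) NOT NULL,
    Col2 CHAR(4000) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
```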
Other than the data type limitations, the following restrictions also apply:

- No FOREIGN KEY or CHECK constraints
- No UNIQUE constraints (except for the PRIMARY KEY)
- A maximum of eight indexes (including the PRIMARY KEY index)
- No schema changes after table creation

More specifically to the last restriction, absolutely no DDL commands can be issued against an existing memory-optimized table: no ALTER TABLE, CREATE INDEX, ALTER INDEX, or DROP INDEX. Effectively, once you create a memory-optimized table, it is unchangeable.
So, with the bad news out of the way, let's create our first memory-optimized table. But before we do, we must note that memory-optimized tables cannot be created in system databases (including tempdb
). If we attempt to do so, we will receive an error message:
Msg 41326, Level 16, State 1, Line 43
Memory optimized tables cannot be created in system databases.
The following code will therefore change the connection into the previously created InMemoryTest
database and then create a test memory-optimized table:
USE InMemoryTest
GO
CREATE TABLE InMemoryTable
(
    UserId INT NOT NULL,
    UserName VARCHAR(255) NOT NULL,
    LoginTime DATETIME2 NOT NULL,
    LoginCount INT NOT NULL,
    CONSTRAINT PK_UserId PRIMARY KEY NONCLUSTERED (UserId),
    INDEX NCL_IDX HASH (UserName) WITH (BUCKET_COUNT = 10000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
Let us consider a few of these lines. Firstly, the table creation command resembles that of many other tables that can be created in SQL Server; we are not confronted with anything unfamiliar. Only in the last two lines do we notice anything peculiar. We have the previously mentioned hash index, with a bucket count of 10,000, on the UserName column. The last line contains two new keywords. MEMORY_OPTIMIZED = ON is simple enough: we are informing SQL Server that this table should be created as a memory-optimized table. The DURABILITY keyword, however, is something that we have only tangentially mentioned so far.
The durability options available to us are either SCHEMA_AND_DATA or SCHEMA_ONLY. These options are clear enough to understand: either both the schema and the data are recovered after a server restart, or only the schema. With SCHEMA_ONLY, we can completely bypass writing data changes to the transaction log, because the table has no data-recovery requirements. This has a major positive performance impact, provided that losing the table's contents on a restart is acceptable.
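A staging or session-state table is a typical candidate for this option. A minimal sketch of a non-durable table (the names are invented for illustration):

```sql
USE InMemoryTest;
GO
-- SCHEMA_ONLY: the data is lost on restart; only the definition is recovered
CREATE TABLE SessionStateStaging
(
    SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload VARCHAR(2000) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
GO
```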
After making our decisions about the columns, indexes, and durability of our table, we issue the CREATE TABLE command. At this point, SQL Server receives the command and generates the table as a memory-optimized object, which results in a compiled DLL being created for the table, including the compiled access methods for the memory-resident object. This compilation process is the main reason for the limitations listed previously, in particular why the table and its indexes cannot be altered after creation.