Estimating Memory Requirements When Migrating To Memory-Optimized Tables
Topic Status: Some information in this topic is pre-release and subject to change in future releases. Pre-release information describes new features or changes to existing features in Microsoft SQL Server 2014.
When migrating a performance-critical part of an application to In-Memory OLTP, the first step is to identify which table(s) will benefit the most.
The Migrating to In-Memory OLTP section contains information about how to migrate constructs to use In-Memory OLTP. This section describes how to estimate the amount of memory that you need to hold data for an in-memory table.
Consider the following disk-based table schema:
create table t_sql (
    col1 int not null primary key,
    col2 int not null,
    col3 int not null,
    col4 int not null,
    col5 char(50) not null,
    col6 char(50) not null,
    col7 char(30) not null,
    col8 char(54) not null)
go

-- The table has three nonclustered indexes:
create nonclustered index t_sql_nci1 on t_sql(col2)
create nonclustered index t_sql_nci2 on t_sql(col3)
create nonclustered index t_sql_nci3 on t_sql(col4)
The following is a size computation for 10 million rows in a disk-based table:
For a row of data:
Total length of fixed length columns = 4 + 4 + 4 + 4 + 50 + 50 + 30 + 54 = 200 bytes.
Row header = 7 bytes.
Total row length = 207 bytes.
For an index row:
Index leaf row, including overhead = 23 bytes.
The size of the disk-based table depends upon the data compression setting and data fragmentation. When estimating memory needed for a memory-optimized table, use the uncompressed size of the table because data compression is not supported for memory-optimized tables.
Data pages 100% filled (no fragmentation):
Rows per page = 8060/207 = 38.
Total number of data pages = 10,000,000 / 38 = 263,158 (rounded up).
Total storage for data pages = 263,158 * 8 KB = 2.15 GB.
Index pages 100% filled (no fragmentation):
Rows per page = 8060/23 = 350.
Total number of index pages = 10,000,000 / 350 = 28,572 (rounded up).
Total storage for index pages for a single index = 28,572 * 8 KB = 234 MB.
For 4 indexes, the space is 234 * 4 = 936 MB.
Total space taken by the table = 2.15 GB + 0.936 GB = 3.086 GB.
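The disk-based computation above can be sketched as a short script (Python here purely for illustration; the page size, row sizes, and index count are the assumptions stated above):

```python
import math

USABLE_PAGE_BYTES = 8060   # usable bytes per 8 KB data page
PAGE_BYTES = 8 * 1024
ROWS = 10_000_000

data_row = 200 + 7                                    # fixed-length columns + 7-byte row header
rows_per_data_page = USABLE_PAGE_BYTES // data_row    # 38
data_pages = math.ceil(ROWS / rows_per_data_page)     # 263,158
data_bytes = data_pages * PAGE_BYTES                  # ~2.15 GB

index_row = 23                                        # index leaf row, including overhead
rows_per_index_page = USABLE_PAGE_BYTES // index_row  # 350
index_pages = math.ceil(ROWS / rows_per_index_page)   # 28,572
index_bytes = 4 * index_pages * PAGE_BYTES            # ~936 MB for 4 indexes

total_gb = (data_bytes + index_bytes) / 1e9           # ~3.09 GB
print(f"{total_gb:.2f} GB")
```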
Consider the following memory-optimized table schema:
create table t_hk (
    col1 int not null primary key nonclustered hash with (bucket_count = 5000000),
    col2 int not null index t1c2_index hash with (bucket_count = 5000000),
    col3 int not null index t1c3_index hash with (bucket_count = 5000000),
    col4 int not null index t1c4_index hash with (bucket_count = 5000000),
    col5 char(50) not null,
    col6 char(50) not null,
    col7 char(30) not null,
    col8 char(54) not null)
with (memory_optimized = on)
go
The following is a size computation for 10 million rows in a memory-optimized table. The total memory used by data rows is estimated as follows:
Row header = 24 bytes.
Length of columns = 200 bytes. This assumes that the char columns contain no trailing blanks; a memory-optimized table does not store trailing blanks, similar to the effect of ROW compression on char columns.
Since there are 4 indexes, each row will allocate 32 bytes (8 byte pointer for each index).
Total length of the row = 24 + 200 + 32 = 256 bytes.
Total memory taken by data rows = 2.56 GB.
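The per-row memory arithmetic above can be sketched as follows (Python purely for illustration; the 24-byte row header and 8-byte index pointers are the values stated above):

```python
ROWS = 10_000_000
row_header = 24            # memory-optimized row header
columns = 200              # char columns assumed to have no trailing blanks
index_pointers = 4 * 8     # one 8-byte pointer per index, 4 indexes

row_bytes = row_header + columns + index_pointers   # 256 bytes
data_gb = ROWS * row_bytes / 1e9                    # 2.56 GB
print(row_bytes, data_gb)
```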
The total memory used by indexes:
A hash table is statically allocated for each hash index.
The number of hash buckets (the bucket_count rounded up to the next power of 2) = 8,388,608.
The total size of the hash table = 8,388,608 buckets * 8 bytes = 67 MB per index.
Since there are 4 indexes, the total size is 268 MB.
The total memory used by the memory-optimized table = 2.828 GB.
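The hash index arithmetic can be sketched the same way (Python purely for illustration; the 8-byte bucket size follows from 8,388,608 buckets occupying 67 MB, as stated above):

```python
def hash_index_bytes(bucket_count: int) -> int:
    """Memory for one hash index: bucket_count rounded up to the
    next power of 2, at 8 bytes per bucket."""
    buckets = 1 << (bucket_count - 1).bit_length()
    return buckets * 8

per_index = hash_index_bytes(5_000_000)    # 8,388,608 buckets -> 67,108,864 bytes (~67 MB)
index_bytes = 4 * per_index                # ~268 MB for the four hash indexes
table_gb = (2.56e9 + index_bytes) / 1e9    # ~2.83 GB for data rows plus indexes
print(f"{table_gb:.2f} GB")
```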
So, for this example, the size of the memory-optimized table is comparable to the size of a disk-based table that has no fragmentation.
If the disk-based table is 50% fragmented, its on-disk size will be roughly double the figure above; when sizing the memory-optimized table, discount that fragmentation and work from the unfragmented (100% full) computation.
Memory-optimized tables use row versioning, so additional memory is required to store row versions. How much additional memory depends on the application's workload.