 

SQL Server Memory Optimized Table - poor performance compared to temporary table

I'm trying to benchmark memory optimized tables in Microsoft SQL Server 2016 with classic temporary tables.

SQL Server version:

Microsoft SQL Server 2016 (SP2) (KB4052908) - 13.0.5026.0 (X64)  Mar 18 2018 09:11:49   
Copyright (c) Microsoft Corporation  
Developer Edition (64-bit) on Windows 10 Enterprise 10.0 <X64> (Build 17134: ) (Hypervisor) 

I'm following steps described here: https://docs.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/faster-temp-table-and-table-variable-by-using-memory-optimization?view=sql-server-ver15.

CrudTest_TempTable 1000, 100, 100
go 1000

versus

CrudTest_memopt_hash 1000, 100, 100
go 1000

What does this test do?

  • 1000 inserts
  • 100 random updates
  • 100 random deletes

And this is repeated 1000 times.

The first stored procedure, which uses classic temporary tables, takes about 6 seconds to run.

The second stored procedure takes at least 15 seconds and usually errors out:

Beginning execution loop

Msg 3998, Level 16, State 1, Line 3
Uncommittable transaction is detected at the end of the batch. The transaction is rolled back.

Msg 701, Level 17, State 103, Procedure CrudTest_memopt_hash, Line 16 [Batch Start Line 2]
There is insufficient system memory in resource pool 'default' to run this query.

I have made the following optimizations (before them it was even worse):

  • the hash index includes both Col1 and SpidFilter

  • doing everything in a single transaction makes it run faster (although it would be nice to run without one)

  • I'm generating random ids - without this, records from every iteration ended up in the same buckets

I haven't created a natively compiled SP yet, since my results so far are awful.
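For reference, a natively compiled version of the insert loop would look roughly like the sketch below. This is my illustration, not code from the post: the procedure name is made up, and the body is reduced to inserts only (functions such as RAND() are not available inside natively compiled modules in SQL Server 2016).

```sql
-- Hypothetical sketch of a natively compiled test procedure.
-- Natively compiled procedures require SCHEMABINDING and an ATOMIC block.
CREATE PROCEDURE dbo.CrudTest_memopt_hash_native
    @InsertsCount INT
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    DECLARE @cnt INT = 0;
    DECLARE @currDate DATETIME2 = SYSDATETIME();

    WHILE @cnt < @InsertsCount
    BEGIN
        INSERT INTO dbo.tempTable_memopt_hash (Col1, Col2, Col3, Col4, Col5)
        VALUES (@cnt, N'payload1', N'payload2', @currDate, 100);
        SET @cnt = @cnt + 1;
    END
END
GO
```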

I have plenty of free RAM on my box and SQL Server is allowed to consume it - in other scenarios it allocates a lot of memory, but in this test case it simply errors out.
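To see how much memory the memory-optimized table is actually holding when the error occurs, the xtp memory DMV can be queried. This query is my addition, not part of the original post:

```sql
-- Memory consumed by the memory-optimized table and its indexes (in KB).
SELECT OBJECT_NAME(object_id)          AS table_name,
       memory_allocated_for_table_kb,
       memory_used_by_table_kb,
       memory_allocated_for_indexes_kb,
       memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id = OBJECT_ID('dbo.tempTable_memopt_hash');
```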

For me these results mean that memory-optimized tables cannot replace temporary tables. Do you have similar results, or am I doing something wrong?

The code that uses temporary tables is:

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

DROP PROCEDURE IF EXISTS CrudTest_TempTable;
GO

CREATE PROCEDURE CrudTest_TempTable 
    @InsertsCount INT, @UpdatesCount INT, @DeletesCount INT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRAN;

    CREATE TABLE #tempTable  
    (  
        Col1 INT NOT NULL PRIMARY KEY CLUSTERED,  
        Col2 NVARCHAR(4000),
        Col3 NVARCHAR(4000),
        Col4 DATETIME2,
        Col5 INT NOT NULL
    ); 

    DECLARE @cnt INT = 0;
    DECLARE @currDate DATETIME2 = GETDATE();


    WHILE @cnt < @InsertsCount
    BEGIN
        INSERT INTO #tempTable (Col1, Col2, Col3, Col4, Col5)
        VALUES (@cnt, 
        'sdkfjsdjfksjvnvsanlknc kcsmksmk ms mvskldamvks mv kv al kvmsdklmsdkl mal mklasdmf kamfksam kfmasdk mfksamdfksafeowa fpmsad lak',
        'msfkjweojfijm  skmcksamepi  eisjfi ojsona npsejfeji a piejfijsidjfai  spfdjsidjfkjskdja kfjsdp fiejfisjd pfjsdiafjisdjfipjsdi s dfipjaiesjfijeasifjdskjksjdja sidjf pajfiaj pfsdj pidfe',
        @currDate, 100);

        SET @cnt = @cnt + 1;
    END
    SET @cnt = 0;

    WHILE @cnt < @UpdatesCount
    BEGIN
        UPDATE #tempTable SET Col5 = 101 WHERE Col1 = cast ((rand() * @InsertsCount) as int);

        SET @cnt = @cnt + 1;
    END
    SET @cnt = 0;

    WHILE @cnt < @DeletesCount
    BEGIN
        DELETE FROM #tempTable WHERE Col1 = cast ((rand() * @InsertsCount) as int);

        SET @cnt = @cnt + 1;
    END

    COMMIT;
END
GO

The objects used in the in-memory test are:

DROP PROCEDURE IF EXISTS CrudTest_memopt_hash;
GO

DROP SECURITY POLICY IF EXISTS tempTable_memopt_hash_SpidFilter_Policy;
GO

DROP TABLE IF EXISTS tempTable_memopt_hash;
GO

DROP FUNCTION IF EXISTS fn_SpidFilter;
GO

CREATE FUNCTION fn_SpidFilter(@SpidFilter smallint)  
    RETURNS TABLE  
    WITH SCHEMABINDING , NATIVE_COMPILATION  
AS  
    RETURN  
        SELECT 1 AS fn_SpidFilter  
            WHERE @SpidFilter = @@spid;
GO

CREATE TABLE tempTable_memopt_hash  
(  
    Col1 INT NOT NULL,
    Col2 NVARCHAR(4000),
    Col3 NVARCHAR(4000),
    Col4 DATETIME2,
    Col5 INT NOT NULL,

    SpidFilter  SMALLINT    NOT NULL   DEFAULT (@@spid),  
    INDEX ix_SpidFiler NONCLUSTERED (SpidFilter), 
    INDEX ix_hash HASH (Col1, SpidFilter) WITH (BUCKET_COUNT=100000),
    CONSTRAINT CHK_SpidFilter CHECK ( SpidFilter = @@spid )
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
GO


CREATE SECURITY POLICY tempTable_memopt_hash_SpidFilter_Policy  
    ADD FILTER PREDICATE dbo.fn_SpidFilter(SpidFilter)  
    ON dbo.tempTable_memopt_hash  
    WITH (STATE = ON); 
GO

And the stored procedure that uses them is:

CREATE PROCEDURE CrudTest_memopt_hash 
    @InsertsCount INT, @UpdatesCount INT, @DeletesCount int
AS
BEGIN

    SET NOCOUNT ON;
    BEGIN TRAN;

    DECLARE @cnt INT = 0;
    DECLARE @currDate DATETIME2 = GETDATE();

    DECLARE @IdxStart INT = CAST ((rand() * 1000) AS INT);

    WHILE @cnt < @InsertsCount
    BEGIN
        INSERT INTO tempTable_memopt_hash(Col1, Col2, Col3, Col4, Col5)
        VALUES (@IdxStart + @cnt, 
        'sdkfjsdjfksjvnvsanlknc kcsmksmk ms mvskldamvks mv kv al kvmsdklmsdkl mal mklasdmf kamfksam kfmasdk mfksamdfksafeowa fpmsad lak',
        'msfkjweojfijm  skmcksamepi  eisjfi ojsona npsejfeji a piejfijsidjfai  spfdjsidjfkjskdja kfjsdp fiejfisjd pfjsdiafjisdjfipjsdi s dfipjaiesjfijeasifjdskjksjdja sidjf pajfiaj pfsdj pidfe',
        @currDate, 100);

        SET @cnt = @cnt + 1;
    END
    SET @cnt = 0;

    WHILE @cnt < @UpdatesCount
    BEGIN
        UPDATE tempTable_memopt_hash 
        SET Col5 = 101 
        WHERE Col1 = @IdxStart + cast ((rand() * @InsertsCount) as int);

        SET @cnt = @cnt + 1;
    END
    SET @cnt = 0;

    WHILE @cnt < @DeletesCount
    BEGIN
        DELETE FROM tempTable_memopt_hash 
        WHERE Col1 = @IdxStart + cast ((rand() * @InsertsCount) as int);

        SET @cnt = @cnt + 1;
    END

    DELETE FROM tempTable_memopt_hash;
    COMMIT;
END
GO

Index stats:

    table:                 [dbo].[tempTable_memopt_hash]
    index:                 PK__tempTabl__3ED0478731BB5AF0
    total_bucket_count:    131072
    empty_bucket_count:    130076
    empty_bucket_percent:  99
    avg_chain_length:      1
    max_chain_length:      3
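For reference, stats like these can be retrieved with a query along the following lines (my reconstruction, not part of the original post):

```sql
-- Bucket and chain statistics for all hash indexes in the current database.
SELECT OBJECT_NAME(hs.object_id) AS [table],
       i.name                    AS [index],
       hs.total_bucket_count,
       hs.empty_bucket_count,
       FLOOR(100.0 * hs.empty_bucket_count / hs.total_bucket_count)
                                 AS empty_bucket_percent,
       hs.avg_chain_length,
       hs.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS hs
JOIN sys.indexes AS i
  ON hs.object_id = i.object_id
 AND hs.index_id  = i.index_id;
```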

UPDATE

I'm including my final test cases and the SQL code for creating the procedures, tables, etc. I've performed the tests on an empty database.

SQL Code: https://pastebin.com/9K6SgAqZ

Test cases: https://pastebin.com/ckSTnVqA

My last run looks like this (the temp table is the fastest of the tables, but I am able to achieve the fastest times using a memory-optimized table variable):

Start CrudTest_TempTable 2019-11-18 10:45:02.983
Beginning execution loop
Batch execution completed 1000 times.
Finish CrudTest_TempTable 2019-11-18 10:45:09.537
Start CrudTest_SpidFilter_memopt_hash 2019-11-18 10:45:09.537
Beginning execution loop
Batch execution completed 1000 times.
Finish CrudTest_SpidFilter_memopt_hash 2019-11-18 10:45:27.747
Start CrudTest_memopt_hash 2019-11-18 10:45:27.747
Beginning execution loop
Batch execution completed 1000 times.
Finish CrudTest_memopt_hash 2019-11-18 10:45:46.100
Start CrudTest_tableVar 2019-11-18 10:45:46.100
Beginning execution loop
Batch execution completed 1000 times.
Finish CrudTest_tableVar 2019-11-18 10:45:47.497
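For context, a memory-optimized table variable is declared through a memory-optimized table type. A minimal sketch (the type name and column list are illustrative, not taken from the post):

```sql
-- A memory-optimized table type; variables of this type live entirely
-- in memory and involve no tempdb allocation.
CREATE TYPE dbo.CrudTestTableType AS TABLE
(
    Col1 INT NOT NULL,
    Col5 INT NOT NULL,
    INDEX ix_hash HASH (Col1) WITH (BUCKET_COUNT = 1024)
) WITH (MEMORY_OPTIMIZED = ON);
GO

-- Usage inside a batch or procedure:
DECLARE @t dbo.CrudTestTableType;
INSERT INTO @t (Col1, Col5) VALUES (1, 100);
```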
asked Nov 14 '19 by empi




1 Answer

IMHO, the test in the OP cannot show the advantages of memory-optimized tables, because the greatest advantage of these tables is that they are lock- and latch-free. This means your updates/inserts/deletes take no locks at all, which permits concurrent changes to these tables.

But the test does not include concurrent changes at all; the code shown makes all the changes in one session.

Another observation: the hash index defined on the table is wrong, since you search on only one column while the hash index is defined on two columns. A hash index on two columns means the hash function is applied to both values together, but you search on only one column, so the hash index simply cannot be used.
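For example, to make the seek on Col1 alone usable against the hash index, the index would need to be declared on Col1 only. A sketch of the changed table definition (same columns as in the post, only the hash index key differs):

```sql
CREATE TABLE tempTable_memopt_hash
(
    Col1 INT NOT NULL,
    Col2 NVARCHAR(4000),
    Col3 NVARCHAR(4000),
    Col4 DATETIME2,
    Col5 INT NOT NULL,

    SpidFilter SMALLINT NOT NULL DEFAULT (@@spid),
    INDEX ix_SpidFilter NONCLUSTERED (SpidFilter),
    -- hash on Col1 only, so WHERE Col1 = ... can perform a point lookup
    INDEX ix_hash HASH (Col1) WITH (BUCKET_COUNT = 100000),
    CONSTRAINT CHK_SpidFilter CHECK (SpidFilter = @@spid)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```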

Do you think that by using memory-optimized tables I can get performance improvements over temp tables, or is it just for limiting IO on tempdb?

Memory-optimized tables are not supposed to substitute for temporary tables. As already mentioned, you'll see the benefit in a highly concurrent OLTP environment, while a temporary table, as you note yourself, is visible only to your session, so there is no concurrency at all.

Eliminate latches and locks. All In-Memory OLTP internal data structures are latch- and lock-free. In-Memory OLTP uses a new multi-version concurrency control (MVCC) to provide transaction consistency. From a user standpoint, it behaves in a way similar to the regular SNAPSHOT transaction isolation level; however, it does not use locking under the hood. This schema allows multiple sessions to work with the same data without locking and blocking each other and improves the scalability of the system allowing fully utilize modern multi-CPU/multi-core hardware.

Cited book: Pro SQL Server Internals by Dmitri Korotkevitch

What do you think about the title "Faster temp table and table variable by using memory optimization"?

I opened this article and saw these examples (in the order they appear in the article):

  • A. Basics of memory-optimized table variables
  • B. Scenario: Replace global tempdb ##table
  • C. Scenario: Replace session tempdb #table

A. I use table variables only in cases where they contain very few rows. Why should I even care about these few rows?

B. Replace global tempdb ##table. I just don't use them at all.

C. Replace session tempdb #table. As already mentioned, a session tempdb #table is not visible to any other session, so what is the gain? That the data doesn't go to disk? Maybe you should think about the fastest SSD for your tempdb if you really have problems with it. Starting with SQL Server 2014, tempdb objects don't necessarily go to disk even in the case of bulk inserts. In any case, I even have RCSI enabled on my databases and have no problems with tempdb.

answered Sep 18 '22 by sepupic