 

How to do very fast inserts to SQL Server 2008

I have a project that involves recording data from a device directly into a SQL Server table.

I do very little processing in code before writing to SQL Server (2008 Express, by the way).

Typically I use the SqlHelper class's ExecuteNonQuery method and pass in a stored procedure name and the list of parameters that the SP expects.

This is very convenient, but I need a much faster way of doing this.

Thanks.

asked May 18 '10 by Charles Okwuagwu

3 Answers

ExecuteNonQuery with an INSERT statement, or even a stored procedure, will get you into the thousands-of-inserts-per-second range on Express. 4000-5000/sec are easily achievable; I know this for a fact.

What usually slows down individual inserts is the wait time for the log flush, and you need to account for that. The easiest solution is to simply batch commits, e.g. commit every 1000 inserts, or every second. This fills up the log pages and amortizes the cost of the log-flush wait over all the inserts in the transaction.
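A minimal sketch of the batch-commit idea, assuming a hypothetical DeviceReadings table with a time and a value column (the table name, schema, and batch size of 1000 are illustrative, not from the question):

```csharp
using System;
using System.Data.SqlClient;

static void InsertBatched(string connectionString, (DateTime time, double value)[] readings)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        var tx = conn.BeginTransaction();
        var cmd = new SqlCommand(
            "INSERT INTO DeviceReadings (ReadingTime, Value) VALUES (@t, @v)", conn, tx);
        var pTime = cmd.Parameters.Add("@t", System.Data.SqlDbType.DateTime2);
        var pVal  = cmd.Parameters.Add("@v", System.Data.SqlDbType.Float);

        int pending = 0;
        foreach (var r in readings)
        {
            pTime.Value = r.time;
            pVal.Value  = r.value;
            cmd.ExecuteNonQuery();

            // Commit every 1000 rows so the log-flush wait is paid once
            // per batch instead of once per row.
            if (++pending == 1000)
            {
                tx.Commit();
                tx = conn.BeginTransaction();
                cmd.Transaction = tx;
                pending = 0;
            }
        }
        tx.Commit(); // flush the final partial batch
    }
}
```

Note the parameters are created once and only their values change per row, which avoids re-preparing the statement.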

With batch commits you'll probably bottleneck on disk log-write performance, about which there is nothing you can do short of changing the hardware (e.g. going to a RAID 0 stripe for the log).

If you hit bottlenecks earlier (unlikely), then you can look into batching statements, i.e. sending one single T-SQL batch with multiple inserts in it. But this seldom pays off.

Of course, you'll need to reduce the size of your writes to a minimum: reduce the width of your table to the minimally needed columns, eliminate non-clustered indexes, and eliminate unneeded constraints. If possible, use a heap instead of a clustered index, since heap inserts are significantly faster than clustered-index ones.
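As an illustration of that advice, a capture table might look like this (the table name and columns are assumptions, not from the question):

```sql
-- A narrow heap: no clustered index, no non-clustered indexes,
-- no constraints beyond NOT NULL. Inserts only have to append pages.
CREATE TABLE dbo.DeviceReadings
(
    ReadingTime datetime2(3) NOT NULL,
    Value       float        NOT NULL
);
```

Any indexes needed for querying can be added later, after the capture phase, or on a separate reporting copy of the data.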

There is little need to use the fast insert interface (i.e. SqlBulkCopy). Using ordinary INSERTs and ExecuteNonQuery with batch commits, you'll exhaust the drive's sequential write throughput long before you need to deploy bulk insert. Bulk insert is needed on fast SAN-connected machines, and you mention Express, so that's probably not the case. There is a perception to the contrary out there, but that is simply because people don't realize that bulk insert gives them batch commit, and it's the batch commit that speeds things up, not the bulk insert.

As with any performance test, make sure you eliminate randomness and preallocate the database and the log; you don't want to hit a database or log growth event during test measurements or in production. That is sooo amateurish.
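Preallocation can be done once, up front, along these lines (database name, logical file names, and sizes are placeholders; note that 2008 Express caps the data file at 4 GB):

```sql
-- Grow the data and log files to their working size before the test,
-- so no autogrow event fires mid-run.
ALTER DATABASE CaptureDb
    MODIFY FILE (NAME = CaptureDb, SIZE = 2GB);
ALTER DATABASE CaptureDb
    MODIFY FILE (NAME = CaptureDb_log, SIZE = 1GB);
```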

answered Oct 24 '22 by Remus Rusanu


Bulk insert would be the fastest, since it is minimally logged (under the simple or bulk-logged recovery model).

.NET also has the SqlBulkCopy class.
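A minimal SqlBulkCopy sketch; the destination table name, DataTable layout, and batch size are assumptions to be matched to your own schema:

```csharp
using System.Data;
using System.Data.SqlClient;

static void BulkLoad(string connectionString, DataTable rows)
{
    using (var bulk = new SqlBulkCopy(connectionString))
    {
        bulk.DestinationTableName = "dbo.DeviceReadings"; // hypothetical table
        bulk.BatchSize = 5000;                            // rows per batch sent to the server
        bulk.WriteToServer(rows);
    }
}
```

WriteToServer also accepts an IDataReader, which avoids buffering the whole set in a DataTable when the data is streamed from elsewhere.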

answered Oct 24 '22 by SQLMenace


Here is a good way to insert a lot of records using a table variable...

...but it is best to limit batches to around 1,000 records at a time; despite the common belief that table variables are "in memory", they are backed by tempdb just like temporary tables, and very large ones perform poorly.

In this example I will insert 2 records into a table with 3 fields: CustID, Firstname, Lastname.

--first create a table variable with the same structure
--(a temporary table would also work; both are backed by tempdb)

declare @MyTblVar table (CustID int, FName nvarchar(50), LName nvarchar(50))

insert into @MyTblVar (CustID, FName, LName) values (100, 'Joe', 'Bloggs')

insert into @MyTblVar (CustID, FName, LName) values (101, 'Mary', 'Smith')

insert into MyCustomerTable (CustID, FName, LName)

select CustID, FName, LName from @MyTblVar
answered Oct 24 '22 by user2696172