 

Parallelizing massive inserts in SQL Server from C# (for better time performance)

Problem statement: How to parallelize inserts in SQL Server (2008)

I am performing massive numeric computations for scientific research with C# multithreaded workers that basically do one thing: test thousands of possible configurations (matrix combinations) over a time period (in days) and store the results in a SQL Server database.

If I store the results in the database one by one (~300,000 rows per computing session, times hundreds of sessions), one after the other, I end up waiting hours for the storing process to finish.

The database design is very simple:

  • Combination Sets
    CS_ID1, Value A1, Value B1, Value C1
    CS_ID2, Value A2, Value B2, Value C2
    .........

  • Results per Day
    CS_ID1, Day1, Result 1
    CS_ID1, Day2, Result 2
    CS_ID1, Day3, Result 3
    .........

    .........
    CS_ID2, Day1, Result N
    CS_ID2, Day2, Result N+1
    CS_ID2, Day3, Result N+2

Each "Combination Set" is tested against sample days and its per-day results are processed in a single C# thread, where a LINQ/SQL query is generated and sent to DB just before the end of the thread. Except combination set IDs sequences, there is NO logical relation between Results. This is very important : This is why I thought of parallelizing the insert stuff as it basically amounts to a bulk dump of result blocks

Another detail that could be important: it is possible to determine beforehand how many rows will be inserted into the database (per block and in total). This could probably help organize table space, split it across pages, or pre-assign ID ranges so that blocks can be stored simultaneously, or something along those lines (no, I'm not "high" or anything :-) ).

I welcome any kind of suggestions in order to make this insert time as short as possible.

Please take into account that I am a C# developer with very basic SQL Server knowledge, and not very familiar with deep DBA technical concepts (I saw that locking tweaks are VERY numerous, and that there are multithreaded and asynchronous capabilities too, but I have to admit I am lost alone in the forest :-) ).

I have 12 CPU cores available and 24 GB of RAM.


EDIT: Tiebreaker
I welcome any clever suggestion on monitoring the time taken by the whole process: from C# thread inception/end to detailed SQL Server insert reports (what happens when, how, and where).
I tried logging with NLog, but it drastically biases the processing time, so I am looking for smart workarounds that are fairly seamless, with minimal impact. Same for the SQL Server side: I know there are a couple of logs and monitoring stored procedures available, but I have not yet figured out which ones suit my situation.
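On the C# side, the kind of low-overhead instrumentation I have in mind would be to time each block insert with System.Diagnostics.Stopwatch and keep the measurements in memory (e.g. in a ConcurrentBag), dumping them only once at the end so the hot path does no I/O. A rough sketch, where the names are just placeholders for my worker code:

// requires: using System; using System.Collections.Concurrent; using System.Diagnostics;
static readonly ConcurrentBag<Tuple<string, long>> Timings =
    new ConcurrentBag<Tuple<string, long>>();

void InsertBlock(/* one combination set's results */)
{
    var sw = Stopwatch.StartNew();

    // ... generate the query and send the block to SQL Server ...

    sw.Stop();
    Timings.Add(Tuple.Create("insert-block", sw.ElapsedMilliseconds)); // in-memory only, no logging I/O here
}

// after all worker threads have completed:
foreach (var t in Timings)
    Console.WriteLine("{0}: {1} ms", t.Item1, t.Item2);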

Asked Nov 16 '10 by Mehdi LAMRANI

1 Answer

300k inserts is a matter of seconds, at worst minutes, not hours. You must be doing something wrong. The ETL SSIS world record back in 2008 was 2.36 TB/hour; 300k records is nothing.

The basic rules of thumb are:

  • Batch commit. This is the most important thing. Don't INSERT a row, then INSERT a row, then INSERT a row ad nauseam, with each insert in its own transaction. Your program has to wait for the log (LDF) to flush after each statement in that case, and it will be slow. Very slow. Instead, start a transaction, insert a batch of rows, then commit the transaction:

Pseudocode:

// requires: using System.Data.SqlClient; using System.Transactions;
bool finished = false;
do
{
    // one transaction per batch: the log (LDF) flushes once per commit instead of once per row
    using (var scope = new TransactionScope(
        TransactionScopeOption.Required,
        new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted }))
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open(); // the connection enlists in the ambient transaction
        for (int i = 0; i < batchSize; i++)
        {
            using (var cmd = new SqlCommand("INSERT ...", conn))
            {
                cmd.ExecuteNonQuery();
            }
        }
        scope.Complete(); // commit the whole batch at once
        // set finished = true when there are no more rows to insert
    }
} while (!finished);
  • if possible, use SqlBulkCopy (a minimal sketch follows below)
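For the second option, a minimal SqlBulkCopy sketch could look like the following. The destination table dbo.ResultsPerDay, its columns, and the results collection are illustrative names standing in for the "Results per Day" table above, not something from the original post:

// requires: using System.Data; using System.Data.SqlClient;
var table = new DataTable();
table.Columns.Add("CS_ID", typeof(int));
table.Columns.Add("Day", typeof(int));
table.Columns.Add("Result", typeof(double));

// fill the DataTable with the block computed by one worker thread
foreach (var r in results)
    table.Rows.Add(r.CombinationSetId, r.Day, r.Value);

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var bulk = new SqlBulkCopy(conn))
    {
        bulk.DestinationTableName = "dbo.ResultsPerDay";
        bulk.BatchSize = 10000;                    // rows sent per round-trip
        bulk.ColumnMappings.Add("CS_ID", "CS_ID");
        bulk.ColumnMappings.Add("Day", "Day");
        bulk.ColumnMappings.Add("Result", "Result");
        bulk.WriteToServer(table);                 // streams the whole block as one bulk insert
    }
}

Each worker thread can build its own DataTable and push its own block over its own connection, which is where the parallelism comes from.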

The first option alone will get you above 3,000 inserts per second (~2 minutes for 300k rows). The second option should get you into the tens-of-thousands-per-second range. If you need more, there are more advanced tricks:

  • use heaps instead of b-trees (no clustered index)
  • disable secondary indexes
  • affinitize clients to soft NUMA nodes and push into locked tables per client connection, then switch them all in using partition switching at the end. This is for really high-end scenarios, millions of rows per second.
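To make the second trick concrete, here is a rough sketch of disabling a secondary (nonclustered) index before the load and rebuilding it afterwards, driven from C#; the index and table names (IX_ResultsPerDay_Day, dbo.ResultsPerDay) are made up for the example:

// requires: using System.Data.SqlClient;
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    using (var disable = new SqlCommand(
        "ALTER INDEX IX_ResultsPerDay_Day ON dbo.ResultsPerDay DISABLE;", conn))
    {
        disable.ExecuteNonQuery();   // the index is no longer maintained during the load
    }

    // ... run the batched inserts / SqlBulkCopy here ...

    using (var rebuild = new SqlCommand(
        "ALTER INDEX IX_ResultsPerDay_Day ON dbo.ResultsPerDay REBUILD;", conn))
    {
        rebuild.CommandTimeout = 0;  // 0 = no timeout; rebuilds on large tables take a while
        rebuild.ExecuteNonQuery();
    }
}

Only disable secondary indexes this way; disabling a clustered index makes the table inaccessible, which is why the heap option in the first bullet is listed separately.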

I suggest you start with the basics of the basics: batch commits.

Answered by Remus Rusanu