 

SqlBulkCopy Error handling / continue on error

I am trying to insert a huge amount of data into SQL Server. My destination table has a unique index called "Hash".

I would like to replace my SqlDataAdapter implementation with SqlBulkCopy. SqlDataAdapter has a property called "ContinueUpdateOnError"; when it is set to true, adapter.Update(table) will insert all the rows it can and tag the failed rows with the RowError property.
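
For reference, here is a rough sketch of that SqlDataAdapter pattern; the table name dbo.MyTable and the connection handling are placeholders, not my real code:

    // Rough sketch of the current SqlDataAdapter approach; "dbo.MyTable" and the
    // connection string are placeholders.
    using System;
    using System.Data;
    using System.Data.SqlClient;

    static class AdapterExample
    {
        public static void UpdateWithRowErrors(DataTable table, string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var adapter = new SqlDataAdapter("SELECT * FROM dbo.MyTable", connection))
            using (var builder = new SqlCommandBuilder(adapter))   // generates the InsertCommand
            {
                adapter.ContinueUpdateOnError = true;   // keep going past failed rows
                adapter.Update(table);                  // inserts everything it can

                // Rows that failed (e.g. unique index violations on "Hash") carry a RowError.
                foreach (DataRow row in table.Rows)
                {
                    if (row.HasErrors)
                        Console.WriteLine(row.RowError);
                }
            }
        }
    }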

The question is how can I use SqlBulkCopy to insert data as quickly as possible while keeping track of which rows got inserted and which rows did not (due to the unique index)?

Here is the additional information:

  1. The process is iterative, often set on a schedule to repeat.

  2. The source and destination tables can be huge, sometimes millions of rows.

  3. Even though it is possible to check for the hash values first, that requires two round trips per row (first selecting the hash from the destination table, then performing the insert). I think checking the RowError after adapter.Update(table) is faster than checking for hash hits row by row.

asked Jun 17 '09 by Paladin

2 Answers

A slightly different approach than already suggested: perform the SqlBulkCopy and catch the SqlException thrown:

    Violation of PRIMARY KEY constraint 'PK_MyPK'. Cannot insert duplicate
    key in object 'dbo.MyTable'. The duplicate key value is (17).

You can then remove all items from your source starting at ID 17, the first record that was duplicated. I'm making assumptions here that apply to my circumstances and possibly not yours, i.e. that the duplication is caused by the exact same data from a previously failed SqlBulkCopy that was interrupted by SQL/network errors during the upload.
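
As a rough illustration of that idea (the table name dbo.MyTable, the error numbers 2627/2601 and the parsing of the message text are assumptions on my part, not something SqlBulkCopy exposes directly):

    // Sketch: catch the duplicate-key SqlException from SqlBulkCopy and pull the
    // reported key value out of the message. The error numbers and message format
    // are assumptions; verify them against your server.
    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Text.RegularExpressions;

    static class BulkCopyCatchExample
    {
        public static void Run(DataTable source, string connectionString)
        {
            try
            {
                using (var bulk = new SqlBulkCopy(connectionString))
                {
                    bulk.DestinationTableName = "dbo.MyTable";   // placeholder
                    bulk.WriteToServer(source);
                }
            }
            catch (SqlException ex) when (ex.Number == 2627 || ex.Number == 2601)
            {
                // e.g. "... The duplicate key value is (17)."
                var match = Regex.Match(ex.Message, @"duplicate key value is \((.+)\)");
                if (match.Success)
                {
                    Console.WriteLine("First duplicate key: " + match.Groups[1].Value);
                    // Remove rows from 'source' starting at that key and retry the copy.
                }
            }
        }
    }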

answered by Oli B


SqlBulkCopy has very limited error-handling facilities; by default it doesn't even check constraints.

However, it's fast, really, really fast.

If you want to work around the duplicate key issue and identify which rows in a batch are duplicates, one option is:

  • Start a transaction.
  • Grab a TABLOCKX on the table, select all current "Hash" values and chuck them into a HashSet.
  • Filter out the duplicates and report them.
  • Insert the data.
  • Commit the transaction.

This process works well if you are inserting huge sets and the amount of data already in the table is not too large.
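
A sketch of that approach, assuming the destination is dbo.MyTable and "Hash" is a string column; adapt names and types to your schema:

    // Sketch of the TABLOCKX + HashSet approach; table/column names and the string
    // hash type are assumptions.
    using System;
    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;
    using System.Linq;

    static class HashSetFilterExample
    {
        public static void Run(DataTable source, string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                using (var tran = connection.BeginTransaction())
                {
                    // Lock the table and read the existing hashes into memory.
                    var existing = new HashSet<string>();
                    using (var cmd = new SqlCommand(
                        "SELECT Hash FROM dbo.MyTable WITH (TABLOCKX)", connection, tran))
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            existing.Add(reader.GetString(0));
                    }

                    // Report the duplicates, then bulk copy only the new rows.
                    var rows = source.AsEnumerable().ToList();
                    foreach (var dup in rows.Where(r => existing.Contains(r.Field<string>("Hash"))))
                        Console.WriteLine("Duplicate hash: " + dup["Hash"]);

                    var newRows = rows.Where(r => !existing.Contains(r.Field<string>("Hash"))).ToList();
                    if (newRows.Count > 0)
                    {
                        using (var bulk = new SqlBulkCopy(connection, SqlBulkCopyOptions.Default, tran))
                        {
                            bulk.DestinationTableName = "dbo.MyTable";
                            bulk.WriteToServer(newRows.CopyToDataTable());
                        }
                    }

                    tran.Commit();
                }
            }
        }
    }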

Can you please expand your question to include the rest of the context of the problem?

EDIT

Now that I have some more context, here is another way you can go about it:

  • Do the bulk insert into a temp table.
  • Start a serializable transaction.
  • Select all temp-table rows that already exist in the destination table and report on them.
  • Insert the data from the temp table into the real table, using a left join on hash so that only the new rows are included.
  • Commit the transaction.

That process is very light on round trips and, considering your specs, should end up being really fast.
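
A sketch of that flow, assuming the destination dbo.MyTable has a unique "Hash" column and a temp table #Staging that mirrors its layout; names and column handling are placeholders:

    // Sketch of the staging-table approach; dbo.MyTable, #Staging and the
    // column handling are placeholders to adapt to the real schema.
    using System;
    using System.Data;
    using System.Data.SqlClient;

    static class StagingTableExample
    {
        public static void Run(DataTable source, string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // 1. Create an empty temp table with the same layout and bulk copy into it.
                using (var create = new SqlCommand(
                    "SELECT TOP 0 * INTO #Staging FROM dbo.MyTable", connection))
                    create.ExecuteNonQuery();

                using (var bulk = new SqlBulkCopy(connection))
                {
                    bulk.DestinationTableName = "#Staging";
                    bulk.WriteToServer(source);
                }

                using (var tran = connection.BeginTransaction(IsolationLevel.Serializable))
                {
                    // 2. Report the rows whose hash already exists in the destination.
                    using (var report = new SqlCommand(
                        "SELECT s.Hash FROM #Staging s JOIN dbo.MyTable d ON d.Hash = s.Hash",
                        connection, tran))
                    using (var reader = report.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("Already present: " + reader[0]);
                    }

                    // 3. Insert only the rows that are not already there.
                    using (var insert = new SqlCommand(
                        "INSERT INTO dbo.MyTable SELECT s.* FROM #Staging s " +
                        "LEFT JOIN dbo.MyTable d ON d.Hash = s.Hash WHERE d.Hash IS NULL",
                        connection, tran))
                        insert.ExecuteNonQuery();

                    tran.Commit();
                }
            }
        }
    }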

answered by Sam Saffron