
Performance of bcp/BULK INSERT vs. Table-Valued Parameters

I'm about to have to rewrite some rather old code using SQL Server's BULK INSERT command because the schema has changed, and it occurred to me that maybe I should think about switching to a stored procedure with a TVP instead, but I'm wondering what effect it might have on performance.

Some background information that might help explain why I'm asking this question:

  • The data actually comes in via a web service. The web service writes a text file to a shared folder on the database server which in turn performs a BULK INSERT. This process was originally implemented on SQL Server 2000, and at the time there was really no alternative other than chucking a few hundred INSERT statements at the server, which actually was the original process and was a performance disaster.

  • The data is bulk inserted into a permanent staging table and then merged into a much larger table (after which it is deleted from the staging table).

  • The amount of data to insert is "large", but not "huge" - usually a few hundred rows, maybe 5-10k rows tops in rare instances. Therefore my gut feeling is that BULK INSERT being a minimally logged operation won't make that big a difference (but of course I'm not sure, hence the question).

  • The insertion is actually part of a much larger pipelined batch process and needs to happen many times in succession; therefore performance is critical.

The reasons I would like to replace the BULK INSERT with a TVP are:

  • Writing the text file over NetBIOS is probably already costing some time, and it's pretty gruesome from an architectural perspective.

  • I believe that the staging table can (and should) be eliminated. The main reason it's there is that the inserted data needs to be used for a couple of other updates at the same time as insertion, and it's far costlier to attempt those updates against the massive production table than against an almost-empty staging table. With a TVP, the parameter basically is the staging table, so I can do anything I want with it before/after the main insert (see the sketch after this list).

  • I could pretty much do away with dupe-checking, cleanup code, and all of the overhead associated with bulk inserts.

  • No need to worry about lock contention on the staging table or tempdb if the server gets a few of these transactions at once (we try to avoid it, but it happens).
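
To make the TVP-as-staging-table idea concrete, here is a rough sketch of the shape I have in mind (the table type, procedure, and column names below are placeholders, not my real schema):

    // T-SQL side (placeholder names, for reference only):
    //   CREATE TYPE dbo.ImportRowType AS TABLE (RecordId INT, Value DECIMAL(18,4));
    //
    //   CREATE PROCEDURE dbo.ImportBatch @Rows dbo.ImportRowType READONLY
    //   AS
    //   BEGIN
    //       -- The TVP plays the role of the staging table: run the related
    //       -- updates against it, then insert into the production table.
    //       UPDATE p SET p.LastValue = r.Value
    //       FROM dbo.Production p JOIN @Rows r ON r.RecordId = p.RecordId;
    //
    //       INSERT INTO dbo.ProductionHistory (RecordId, Value)
    //       SELECT RecordId, Value FROM @Rows;
    //   END

    // C# side: pass the rows as a structured (table-valued) parameter.
    using System.Data;
    using System.Data.SqlClient;

    static void ImportBatch(string connectionString, DataTable rows)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.ImportBatch", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;

            SqlParameter p = cmd.Parameters.AddWithValue("@Rows", rows);
            p.SqlDbType = SqlDbType.Structured;
            p.TypeName = "dbo.ImportRowType";   // must match the user-defined table type

            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }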

I'm obviously going to profile this before putting anything into production, but I thought it might be a good idea to ask around first before I spend all that time, see if anybody has any stern warnings to issue about using TVPs for this purpose.

So - for anyone who's cozy enough with SQL Server 2008 to have tried or at least investigated this, what's the verdict? For inserts of, let's say, a few hundred to a few thousand rows, happening on a fairly frequent basis, do TVPs cut the mustard? Is there a significant difference in performance compared to bulk inserts?


Update: Now with 92% fewer question marks!

(AKA: Test Results)

The end result is now in production after what feels like a 36-stage deployment process. Both solutions were extensively tested:

  • Ripping out the shared-folder code and using the SqlBulkCopy class directly;
  • Switching to a Stored Procedure with TVPs.

Just so readers can get an idea of what exactly was tested, to allay any doubts as to the reliability of this data, here is a more detailed explanation of what this import process actually does:

  1. Start with a temporal data sequence that is ordinarily about 20-50 data points (although it can sometimes be up to a few hundred);

  2. Do a whole bunch of crazy processing on it that's mostly independent of the database. This process is parallelized, so about 8-10 of the sequences in (1) are being processed at the same time. Each parallel process generates 3 additional sequences.

  3. Take all 3 sequences and the original sequence and combine them into a batch.

  4. Combine the batches from all 8-10 now-finished processing tasks into one big super-batch.

  5. Import it using either the BULK INSERT strategy (see next step), or TVP strategy (skip to step 8).

  6. Use the SqlBulkCopy class to dump the entire super-batch into 4 permanent staging tables.

  7. Run a Stored Procedure that (a) performs a bunch of aggregation steps on 2 of the tables, including several JOIN conditions, and then (b) performs a MERGE on 6 production tables using both the aggregated and non-aggregated data. (Finished)

    OR

  8. Generate 4 DataTable objects containing the data to be merged; 3 of them contain CLR types which unfortunately aren't properly supported by ADO.NET TVPs, so they have to be shoved in as string representations, which hurts performance a bit (see the sketch after this list).

  9. Feed the TVPs to a Stored Procedure, which does essentially the same processing as (7), but directly with the received tables. (Finished)
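
As an aside on step 8, this is roughly what the string workaround for the CLR-typed columns looks like (the types, column names, and method name here are purely illustrative, not the real schema):

    using System;
    using System.Collections.Generic;
    using System.Data;

    // Build one of the DataTables destined for a TVP. The column that would
    // naturally hold a CLR UDT is declared as a string column instead, and the
    // value is sent in its string form; that extra conversion (and the parse
    // back on the server) is the performance hit mentioned above.
    static DataTable BuildTvpTable(IEnumerable<KeyValuePair<int, object>> batch)
    {
        var table = new DataTable();
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("ClrValueText", typeof(string));   // string stand-in for the CLR type

        foreach (var row in batch)
            table.Rows.Add(row.Key, Convert.ToString(row.Value));

        return table;
    }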

The results were reasonably close, but the TVP approach ultimately performed better on average, even when the data exceeded 1000 rows by a small amount.

Note that this import process is run many thousands of times in succession, so it was very easy to get an average time simply by counting how many hours (yes, hours) it took to finish all of the merges.

Originally, an average merge took almost exactly 8 seconds to complete (under normal load). Removing the NetBIOS kludge and switching to SqlBulkCopy reduced the time to almost exactly 7 seconds. Switching to TVPs further reduced the time to 5.2 seconds per batch. That's a 35% reduction in per-batch time for a process whose running time is measured in hours - so not bad at all. It's also a ~25% improvement over SqlBulkCopy alone.

I am actually fairly confident that the true improvement was significantly more than this. During testing it became apparent that the final merge was no longer the critical path; instead, the Web Service doing all of the data processing was starting to buckle under the number of incoming requests. Neither the CPU nor the database I/O was really maxed out, and there was no significant locking activity. In some cases we were seeing a gap of a few idle seconds between successive merges. There was a gap with SqlBulkCopy too, but a much smaller one (half a second or so). But I suppose that will become a tale for another day.

Conclusion: Table-Valued Parameters really do perform better than BULK INSERT operations for complex import+transform processes operating on mid-sized data sets.


I'd like to add one other point, just to assuage any apprehension on the part of the folks who are pro-staging-tables. In a way, this entire service is one giant staging process. Every step of the process is heavily audited, so we don't need a staging table to determine why some particular merge failed (although in practice it almost never happens). All we have to do is set a debug flag in the service and it will break to the debugger or dump its data to a file instead of the database.

In other words, we already have more than enough insight into the process and don't need the safety of a staging table; the only reason we had the staging table in the first place was to avoid thrashing on all of the INSERT and UPDATE statements that we would have had to use otherwise. In the original process, the staging data only lived in the staging table for fractions of a second anyway, so it added no value in maintenance/maintainability terms.

Also note that we have not replaced every single BULK INSERT operation with TVPs. Several operations that deal with larger amounts of data and/or don't need to do anything special with the data other than throw it at the DB still use SqlBulkCopy. I am not suggesting that TVPs are a performance panacea, only that they succeeded over SqlBulkCopy in this specific instance involving several transforms between the initial staging and the final merge.

So there you have it. Point goes to TToni for finding the most relevant link, but I appreciate the other responses as well. Thanks again!

asked Jan 27 '10 by Aaronaught



2 Answers

I don't really have experience with TVPs yet, but there is a nice performance comparison chart vs. BULK INSERT in MSDN here.

They say that BULK INSERT has a higher startup cost, but is faster thereafter. In a remote client scenario they draw the line at around 1000 rows (for "simple" server logic). Judging from their description, I would say you should be fine using TVPs. The performance hit - if any - is probably negligible, and the architectural benefits seem very good.

Edit: On a side note, you can avoid the server-local file and still use bulk copy by using the SqlBulkCopy class. Just populate a DataTable and feed it to the WriteToServer method of a SqlBulkCopy instance. It's easy to use, and very fast.
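
A rough sketch of that, with a placeholder destination table and column names (SqlBulkCopy gives you bcp/BULK INSERT-style loading, but driven from client code, so no shared folder is needed):

    using System.Data;
    using System.Data.SqlClient;

    // Load a DataTable straight into a table on the server - no intermediate file.
    static void BulkLoad(string connectionString, DataTable rows)
    {
        using (var bulk = new SqlBulkCopy(connectionString))
        {
            bulk.DestinationTableName = "dbo.StagingTable";
            bulk.BatchSize = 5000;                        // optional tuning knob
            bulk.ColumnMappings.Add("Id", "Id");          // source column -> destination column
            bulk.ColumnMappings.Add("Value", "Value");
            bulk.WriteToServer(rows);                     // streams the rows to the server
        }
    }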

answered by TToni


The chart in the documentation linked from @TToni's answer needs to be taken in context. I am not sure how much actual research went into those recommendations (also note that the chart seems to be available only in the 2008 and 2008 R2 versions of that documentation).

On the other hand there is this whitepaper from the SQL Server Customer Advisory Team: Maximizing Throughput with TVP

I have been using TVPs since 2009 and have found, at least in my experience, that for anything other than a simple insert into a destination table with no additional logic (which is rarely the case), TVPs are typically the better option.

I tend to avoid staging tables as data validation should be done at the app layer. By using TVPs, that is easily accommodated and the TVP Table Variable in the stored procedure is, by its very nature, a localized staging table (hence no conflict with other processes running at the same time like you get when using a real table for staging).

Regarding the testing done in the Question, I think it could be shown to be even faster than what was originally found:

  1. You should not be using a DataTable unless your application has a use for it outside of sending the values to the TVP. Using the IEnumerable<SqlDataRecord> interface is faster and uses less memory, as you are not duplicating the collection in memory only to send it to the DB (see the sketch after this list). I have this documented in the following places:
    • How can I insert 10 million records in the shortest time possible? (lots of extra info and links here as well)
    • Pass Dictionary<string,int> to Stored Procedure T-SQL
    • Streaming Data Into SQL Server 2008 From an Application (on SQLServerCentral.com ; free registration required)
  2. TVPs are Table Variables and as such do not maintain statistics. Meaning, they report only having 1 row to the Query Optimizer. So, in your proc, either:
    • Use statement-level recompile on any queries using the TVP for anything other than a simple SELECT: OPTION (RECOMPILE)
    • Create a local temporary table (i.e. single #) and copy the contents of the TVP into the temp table
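
Here is a rough sketch that combines both points; all object names are hypothetical, and the server-side DDL is shown only as a comment for reference:

    // Server side (for reference):
    //   CREATE TYPE dbo.IdValueType AS TABLE (Id INT, Value DECIMAL(18,4));
    //
    //   CREATE PROCEDURE dbo.ImportIdValues @Rows dbo.IdValueType READONLY
    //   AS
    //     MERGE dbo.Production AS tgt
    //     USING @Rows AS src ON tgt.Id = src.Id
    //     WHEN MATCHED THEN UPDATE SET tgt.Value = src.Value
    //     WHEN NOT MATCHED THEN INSERT (Id, Value) VALUES (src.Id, src.Value)
    //     OPTION (RECOMPILE);   -- TVPs carry no statistics, so recompile per call

    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;
    using Microsoft.SqlServer.Server;   // SqlDataRecord, SqlMetaData

    // Yields one SqlDataRecord per source row; the collection is never
    // duplicated in memory - it is streamed as ADO.NET reads the enumerator.
    static IEnumerable<SqlDataRecord> StreamRows(IEnumerable<KeyValuePair<int, decimal>> source)
    {
        var record = new SqlDataRecord(
            new SqlMetaData("Id", SqlDbType.Int),
            new SqlMetaData("Value", SqlDbType.Decimal, 18, 4));

        foreach (var row in source)
        {
            record.SetInt32(0, row.Key);
            record.SetDecimal(1, row.Value);
            yield return record;   // the same record instance is reused for each row
        }
    }

    static void Import(string connectionString, IEnumerable<KeyValuePair<int, decimal>> source)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.ImportIdValues", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;

            var p = cmd.Parameters.Add("@Rows", SqlDbType.Structured);
            p.TypeName = "dbo.IdValueType";
            p.Value = StreamRows(source);   // streamed to the TVP as the command executes

            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }

The OPTION (RECOMPILE) (or the temp-table copy from the second bullet) is what keeps the optimizer from building a plan around the 1-row estimate.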
answered by Solomon Rutzky