OK, I am going to do my best describing this. I have a stored procedure that takes in XML and then updates and inserts into another table. This was working yesterday. All I changed today was loading the temp table with OPENXML instead of xml.nodes(), and even after changing it back I am still getting this interesting issue. I have an UPDATE and an INSERT in the same transaction. The UPDATE works and then the INSERT hangs: no error, no nothing, going on 9 minutes now. It normally takes 10 seconds. There are no blocking processes according to master.sys.sysprocesses. The funny thing is that the SELECT feeding the INSERT returns no rows, since they are already in the database. The UPDATE updates 72,438 rows in:
SQL Server Execution Times:
CPU time = 1359 ms, elapsed time = 7955 ms.
ROWS AFFECTED(72438)
I am out of ideas as to what could be causing this. Permissions? I don't think so. Space? I don't think so either, because an error would be returned.
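For reference, the two versions of that load look roughly like this (a simplified sketch; the real column list is the one used in the queries below, and the XML element names are placeholders):

-- OPENXML version
DECLARE @h INT;
EXEC sp_xml_preparedocument @h OUTPUT, @xml;

INSERT INTO @WeeklySummaryInserts (EntityId, EntityType, WeekEndingDate, BalanceId)
SELECT EntityId, EntityType, WeekEndingDate, BalanceId
FROM OPENXML(@h, '/Rows/Row', 2)   -- 2 = element-centric mapping
WITH (EntityId INT, EntityType INT, WeekEndingDate DATETIME, BalanceId INT);

EXEC sp_xml_removedocument @h;

-- .nodes() version of the same load
INSERT INTO @WeeklySummaryInserts (EntityId, EntityType, WeekEndingDate, BalanceId)
SELECT r.value('(EntityId/text())[1]', 'INT'),
       r.value('(EntityType/text())[1]', 'INT'),
       r.value('(WeekEndingDate/text())[1]', 'DATETIME'),
       r.value('(BalanceId/text())[1]', 'INT')
FROM @xml.nodes('/Rows/Row') AS T(r);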
The queries:
UPDATE [Sales].[dbo].[WeeklySummary]
SET [CountryId] = I.CountryId
,[CurrencyId] = I.CurrencyId
,[WeeklySummaryType] = @WeeklySummaryTypeId
,[WeeklyBalanceAmt] = M.WeeklyBalanceAmt + I.WeeklyBalanceAmt
,[CurrencyFactor] = I.CurrencyFactor
,[Comment] = I.Comment
,[UserStamp] = I.UserStamp
,[DateTimeStamp] = I.DateTimeStamp
FROM
[Sales].[dbo].[WeeklySummary] M
INNER JOIN
@WeeklySummaryInserts I
ON M.EntityId = I.EntityId
AND M.EntityType = I.EntityType
AND M.WeekEndingDate = I.WeekEndingDate
AND M.BalanceId = I.BalanceId
AND M.ItemType = I.ItemType
AND M.AccountType = I.AccountType
and
INSERT INTO [Sales].[dbo].[WeeklySummary]
([EntityId]
,[EntityType]
,[WeekEndingDate]
,[BalanceId]
,[CountryId]
,[CurrencyId]
,[WeeklySummaryType]
,[ItemType]
,[AccountType]
,[WeeklyBalanceAmt]
,[CurrencyFactor]
,[Comment]
,[UserStamp]
,[DateTimeStamp])
SELECT
I.[EntityId]
, I.[EntityType]
, I.[WeekEndingDate]
, I.[BalanceId]
, I.[CountryId]
, I.[CurrencyId]
, @WeeklySummaryTypeId
, I.[ItemType]
, I.[AccountType]
, I.[WeeklyBalanceAmt]
, I.[CurrencyFactor]
, I.[Comment]
, I.[UserStamp]
, I.[DateTimeStamp]
FROM
@WeeklySummaryInserts I
LEFT OUTER JOIN
[Sales].[dbo].[WeeklySummary] M
ON I.EntityId = M.EntityId
AND I.EntityType = M.EntityType
AND I.WeekEndingDate = M.WeekEndingDate
AND I.BalanceId = M.BalanceId
AND I.ItemType = M.ItemType
AND I.AccountType = M.AccountType
WHERE M.WeeklySummaryId IS NULL
UPDATE:
Trying the advice here worked for a while. I run the following before my stored procedure call:
UPDATE STATISTICS Sales.dbo.WeeklySummary;
UPDATE STATISTICS Sales.dbo.ARSubLedger;
UPDATE STATISTICS dbo.AccountBalance;
UPDATE STATISTICS dbo.InvoiceUnposted;
UPDATE STATISTICS dbo.InvoiceItemUnposted;
UPDATE STATISTICS dbo.InvoiceItemUnpostedHistory;
UPDATE STATISTICS dbo.InvoiceUnpostedHistory;
EXEC sp_recompile N'dbo.proc_ChargeRegister';
Still stalling at the Insert Statement, which again inserts 0 rows.
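While it is stalled, I can check what the session is actually waiting on from a second connection with something along these lines (the session id 53 is just a placeholder for the spid running the sproc):

SELECT r.session_id, r.status, r.command,
       r.wait_type, r.wait_time, r.last_wait_type,
       r.blocking_session_id
FROM sys.dm_exec_requests AS r
WHERE r.session_id = 53;   -- replace with the spid running the stored procedure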
There are really only a few things that can be going on, and the trick here is to eliminate them in order, from simplest to most complex.
STEP 1: Hand craft a set of XML to run that will produce exactly one insert and no updates, so you can go "back to basics" as it were and establish that the code is still doing what you expect, and the result is exactly what you expect. This may seem silly or unnecessary but you really need this reality check to start.
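Assuming the proc shreds an XML shape along the lines of the queries above (the element names, values, and call below are guesses, purely for illustration), a single-row "reality check" input might look like:

DECLARE @xml XML = N'
<Rows>
  <Row>
    <EntityId>999999</EntityId>   <!-- pick keys that do NOT already exist in WeeklySummary -->
    <EntityType>1</EntityType>
    <WeekEndingDate>2012-06-30</WeekEndingDate>
    <BalanceId>1</BalanceId>
    <CountryId>1</CountryId>
    <CurrencyId>1</CurrencyId>
    <ItemType>1</ItemType>
    <AccountType>1</AccountType>
    <WeeklyBalanceAmt>10.00</WeeklyBalanceAmt>
    <CurrencyFactor>1.0</CurrencyFactor>
    <Comment>single-row test</Comment>
    <UserStamp>test</UserStamp>
    <DateTimeStamp>2012-06-30T00:00:00</DateTimeStamp>
  </Row>
</Rows>';

EXEC dbo.proc_ChargeRegister @xml;   -- hypothetical parameter list; adjust to the real signature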
STEP 2: Hand craft a set of XML that will produce a medium-size set of inserts, still with no updates. Based on your experience with the routine, try to find something that will run in 3-4 seconds. Perhaps 5000 rows. Does it continue to behave as expected?
STEP 3: Assuming steps 1 and 2 pass easily, the next most likely problem is TRANSACTION SIZE. If your update hits 74,000 rows in a single statement, then SQL Server must allocate resources to be able to roll back all 74,000 rows in the case of an abort. Generally you should assume the resources (and time) required to maintain a transaction grow much faster than linearly as the row count goes up. So hand-craft one more set of inserts that contains 50,000 rows. You should find it takes dramatically more time. Let it finish. Is it 10 minutes, an hour? If it takes a long time but finishes, you have an issue with TRANSACTION SIZE: the server is choking trying to keep track of everything required to roll back the insert in the event of a failure.
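To see how much rollback work the open transaction is accumulating while the big statement runs, something like this from a second connection gives a rough picture (a sketch using the transaction DMVs):

SELECT s.session_id,
       d.database_transaction_log_record_count,
       d.database_transaction_log_bytes_used,
       d.database_transaction_log_bytes_reserved
FROM sys.dm_tran_session_transactions AS s
JOIN sys.dm_tran_database_transactions AS d
    ON d.transaction_id = s.transaction_id
WHERE d.database_id = DB_ID('Sales');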
STEP 4: Determine if your entire stored procedure is operating within a single implicit transaction. If it is, the matter is even worse, because SQL Server is tracking together everything required to roll back both the 74,000 updates and the ??? inserts in a single transaction. See this page:
http://msdn.microsoft.com/en-us/library/ms687099(v=vs.85).aspx
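A quick way to check from the connection that calls the sproc (just a sketch; bit 2 of @@OPTIONS is the IMPLICIT_TRANSACTIONS setting):

SELECT CASE WHEN (@@OPTIONS & 2) = 2 THEN 'ON' ELSE 'OFF' END AS implicit_transactions,
       @@TRANCOUNT AS open_transaction_count;   -- > 0 means you are already inside a transaction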
STEP 5: If you've got a single implicit transaction, you can either (A) turn that off, which may help some but will not entirely fix the problem, or (B) break the sproc into two separate calls, one for updates and one for inserts, so that at least the two run in separate transactions.
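In outline, option B amounts to committing each statement on its own before the next one starts (a skeleton only; the actual statements are the UPDATE and INSERT from the question, and error handling is omitted):

SET IMPLICIT_TRANSACTIONS OFF;    -- option A, if it was ON

BEGIN TRANSACTION;
    -- the UPDATE ... FROM WeeklySummary / @WeeklySummaryInserts statement goes here
COMMIT TRANSACTION;

BEGIN TRANSACTION;
    -- the INSERT ... SELECT with the LEFT OUTER JOIN anti-join goes here
COMMIT TRANSACTION;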
STEP 6: Consider "chunking". This is a technique for avoiding exploding transaction costs. Considering just the INSERT to get us started, you wrap the insert into a loop that begins and commits a transaction inside each iteration, and exits when affected rows is zero. The INSERT is modified so that you pull only the first 1000 rows from the source and insert them (that 1000 number is kind of arbitrary, you may find 5000 produces better performance, you have to experiment a bit). Once the INSERT affects zero rows, there are no more rows to handle and the loop exits.
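A rough sketch of what that loop could look like against the INSERT from the question (the batch size of 1000 is arbitrary, as noted above; each pass automatically excludes rows inserted by the previous pass, because they now exist in the target):

DECLARE @BatchRows INT = 1;

WHILE @BatchRows > 0
BEGIN
    BEGIN TRANSACTION;

    INSERT INTO [Sales].[dbo].[WeeklySummary]
        ([EntityId], [EntityType], [WeekEndingDate], [BalanceId], [CountryId],
         [CurrencyId], [WeeklySummaryType], [ItemType], [AccountType],
         [WeeklyBalanceAmt], [CurrencyFactor], [Comment], [UserStamp], [DateTimeStamp])
    SELECT TOP (1000)
        I.[EntityId], I.[EntityType], I.[WeekEndingDate], I.[BalanceId], I.[CountryId],
        I.[CurrencyId], @WeeklySummaryTypeId, I.[ItemType], I.[AccountType],
        I.[WeeklyBalanceAmt], I.[CurrencyFactor], I.[Comment], I.[UserStamp], I.[DateTimeStamp]
    FROM @WeeklySummaryInserts I
    LEFT OUTER JOIN [Sales].[dbo].[WeeklySummary] M
        ON  I.EntityId = M.EntityId
        AND I.EntityType = M.EntityType
        AND I.WeekEndingDate = M.WeekEndingDate
        AND I.BalanceId = M.BalanceId
        AND I.ItemType = M.ItemType
        AND I.AccountType = M.AccountType
    WHERE M.WeeklySummaryId IS NULL;

    SET @BatchRows = @@ROWCOUNT;   -- zero once every remaining source row already exists in the target

    COMMIT TRANSACTION;
END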
QUICK EDIT: The "chunking" system works because the complete throughput for a large set of rows looks something like a quadratic. If you execute an INSERT that affects a huge number of rows, the total time for all rows to be handled explodes. If on the other hand you break it up and go row-by-row, the overhead of opening and committing each statement causes the total time for all rows to explode. Somewhere in the middle, when you've "chunked" out 1k rows per statement, the transaction requirements are at their minimum and the overhead of opening and committing the statement is negligible, and the total time for all rows to be handled is a minimum.