Let's say I have a simple stored procedure that looks like this (note: this is just an example, not a practical procedure):
CREATE PROCEDURE incrementCounter AS
DECLARE @current int
SET @current = (select CounterColumn from MyTable) + 1
UPDATE
MyTable
SET
CounterColumn = @current
GO
We're assuming I have a table called 'MyTable' that contains one row, with the 'CounterColumn' holding our current count.
Can this stored procedure be executed multiple times, at the same time?
i.e. is this possible:
I call 'incrementCounter' twice. Call A gets to the point where it sets the @current variable (let's say it is 5). Call B reaches the same point and also reads 5. Call A finishes executing, then Call B finishes. In the end the table should contain 6, but instead contains 5 because the two executions overlapped.
This is for SQL Server.
Each statement is atomic, but if you want the stored procedure to be atomic (or any sequence of statements in general), you need to explicitly surround the statements with
BEGIN TRANSACTION
Statement ...
Statement ...
COMMIT TRANSACTION
(It's common to abbreviate these as BEGIN TRAN and COMMIT TRAN.)
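Applied to the procedure in the question, the wrapped version might look like this (a sketch only; table and column names are taken from the question):

```sql
CREATE PROCEDURE incrementCounter AS
BEGIN TRANSACTION
    DECLARE @current int
    -- read the current value, then write it back incremented
    SET @current = (SELECT CounterColumn FROM MyTable) + 1
    UPDATE MyTable SET CounterColumn = @current
COMMIT TRANSACTION
GO
```

Note that, as another answer here points out, the transaction alone does not prevent the lost update under the default READ COMMITTED isolation level; it only guarantees the two statements succeed or fail as a unit.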
Of course there are lots of ways to get into lock trouble depending on what else is going on at the same time, so you may need a strategy for dealing with failed transactions. (A complete discussion of all the circumstances that might result in locks, however you contrive this particular SP, is beyond the scope of the question.) Because of the atomicity, though, a failed transaction can safely be resubmitted. And in my experience you'll probably be fine, although without knowing your transaction volumes and the other activity on the database I can't be sure. Excuse me for stating the obvious.
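A resubmission strategy for a failed transaction can be as simple as a retry loop around the call. This is only a hypothetical sketch (the retry count and error handling are illustrative); error 1205 is the SQL Server error raised when a session is chosen as a deadlock victim, which is a safe case to retry:

```sql
DECLARE @retries int = 3
WHILE @retries > 0
BEGIN
    BEGIN TRY
        EXEC incrementCounter
        BREAK  -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 1205   -- deadlock victim: transaction was rolled back, resubmit
            SET @retries = @retries - 1
        ELSE
            THROW                  -- anything else: re-raise to the caller
    END CATCH
END
```

(THROW requires SQL Server 2012 or later; on older versions you would use RAISERROR instead.)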
Contrary to a popular misconception, a transaction by itself will not make this safe under the default settings.
In addition to placing the code between BEGIN TRANSACTION and COMMIT TRANSACTION, you'd need to ensure that your transaction isolation level is set correctly.
For example, the SERIALIZABLE isolation level will prevent lost updates when the code runs concurrently, but READ COMMITTED (SQL Server's default) will not.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
As others have already mentioned, whilst ensuring consistency, this can cause blocking and deadlocks and so may not be the best solution in practice.
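Putting the pieces together, a version of the procedure with the isolation level set might look like this (a sketch, reusing the names from the question):

```sql
CREATE PROCEDURE incrementCounter AS
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
    DECLARE @current int
    SET @current = (SELECT CounterColumn FROM MyTable) + 1
    UPDATE MyTable SET CounterColumn = @current
COMMIT TRANSACTION
GO
```

Under SERIALIZABLE, two concurrent executions cannot both read the old value and then both write: one of them will be blocked or chosen as a deadlock victim and rolled back, which is exactly the blocking/deadlock cost mentioned above, and why the failed call then needs to be retried.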