Now a coworker and I are arguing about the effect of non-vital BEGIN TRAN....COMMIT TRAN blocks. I've written about 140 stored procedures for simple insert-update-delete operations, and since we may later need to do some extra operations in them, I've already included the might-be-necessary BEGIN TRAN and COMMIT TRAN blocks like so:
CREATE PROCEDURE [Users].[Login_Insert]
    @Username nvarchar(50) OUTPUT,
    @Password char(40),
    @FullName nvarchar(150),
    @LoginTypeId int
AS
SET NOCOUNT ON;
BEGIN TRY
    BEGIN TRAN

    INSERT [Users].[Login]
    (
        [Username],
        [Password],
        [FullName],
        [LoginTypeId]
    )
    VALUES
    (
        @Username,
        @Password,
        @FullName,
        @LoginTypeId
    )

    COMMIT TRAN
    RETURN 1
END TRY
BEGIN CATCH
    ROLLBACK TRAN
    RETURN -1
END CATCH
GO
Now, many of these transactions may never be necessary. Are these extraneous blocks going to affect performance in a noticeable manner? Thanks in advance.
Not enough to notice.
That is, each TXN will be open for an extra OhNoSecond between BEGIN TRAN and INSERT. I'd be impressed if anyone could measure it.
However, if you did BEGIN TRAN and then prompted for user input, your legs would need breaking...
It's a good idea though: I do this so all my write procs are 100% consistent, have the same error handling, can be nested, etc.
Edit: after Remus' answer, I see I didn't link to my nested TXN template: Nested stored procedures containing TRY CATCH ROLLBACK pattern? This differs from Remus' in that it always rolls back and has no SAVEPOINTs.
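For reference, the shape of that template is roughly the following (a sketch from memory, not verbatim from the linked answer; assumes SQL Server 2005+, and the proc name is just a placeholder):

CREATE PROCEDURE [SomeWriteProc]
AS
SET XACT_ABORT, NOCOUNT ON  -- any error dooms the transaction and aborts the batch
BEGIN TRY
    BEGIN TRAN
    -- ... do the work; nested procs written to the same template are safe to call ...
    COMMIT TRAN
END TRY
BEGIN CATCH
    -- always a full rollback, no savepoints
    IF @@TRANCOUNT > 0
        ROLLBACK TRAN
    -- re-raise so the caller (or client) sees the original error;
    -- see the RAISERROR idiom in Remus' answer below
    DECLARE @ErrorMessage nvarchar(2048)
    SELECT @ErrorMessage = ERROR_MESSAGE()
    RAISERROR (@ErrorMessage, 16, 1)
END CATCH
GO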
Edit: a quick and dirty test shows the version with the transaction is quicker around 2/3 of the time:
SET NOCOUNT ON
SET STATISTICS IO OFF

DECLARE @date DATETIME2
DECLARE @noTran INT
DECLARE @withTran INT
SET @noTran = 0
SET @withTran = 0

DECLARE @t TABLE (ColA INT)
INSERT @t VALUES (1)

DECLARE
    @count INT,
    @value INT
SET @count = 1

WHILE @count < 100
BEGIN
    -- timed UPDATE, no explicit transaction
    SET @date = GETDATE()
    UPDATE smalltable SET smalltablename = CASE smalltablename WHEN 'test1' THEN 'test' ELSE 'test2' END WHERE smalltableid = 1
    SET @noTran = @noTran + DATEDIFF(MICROSECOND, @date, GETDATE())

    -- same UPDATE wrapped in an explicit transaction
    SET @date = GETDATE()
    BEGIN TRAN
    UPDATE smalltable SET smalltablename = CASE smalltablename WHEN 'test1' THEN 'test' ELSE 'test2' END WHERE smalltableid = 1
    COMMIT TRAN
    SET @withTran = @withTran + DATEDIFF(MICROSECOND, @date, GETDATE())

    SET @count = @count + 1
END

SELECT
    @noTran / 1000000. AS Seconds_NoTransaction,
    @withTran / 1000000. AS Seconds_WithTransaction
Seconds_NoTransaction  Seconds_WithTransaction
2.63200000             2.70400000
2.16700000             2.12300000
Reversing the order of the two updates gives the same behaviour.
In the code you posted there will be no measurable effect, but transactions do have an effect on performance: they can dramatically improve it through log flush commit grouping, or dramatically reduce it through incorrectly managed contention. The bottom line, though, is that when transactions are needed for correctness you cannot skip them. That being said, your template is actually quite bad vis-a-vis transactions and try-catch blocks: transaction handling in a catch block must use tri-state logic on the XACT_STATE() return values (-1, 0, 1) and properly handle doomed transactions. See Exception handling and nested transactions for an example.
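In sketch form, the CATCH block has to distinguish all three states (illustrative only; the linked article's full pattern also rolls back to a savepoint in the committable case):

BEGIN TRY
    BEGIN TRAN
    -- ... the procedure's work ...
    COMMIT TRAN
END TRY
BEGIN CATCH
    IF XACT_STATE() = -1
        ROLLBACK TRANSACTION  -- doomed: a full rollback is the only legal action
    ELSE IF XACT_STATE() = 1
        ROLLBACK TRANSACTION  -- committable: ROLLBACK to a savepoint is also an option here
    -- XACT_STATE() = 0: no transaction is active, so no ROLLBACK at all
    -- ... then re-raise the error for the caller ...
END CATCH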
Also, you should never, ever mix try-catch error handling with return-code error handling. Pick one and stick with it, preferably try-catch. In other words, your stored procedure should RAISE, not RETURN -1. Mixing exceptions with error codes makes your code a nightmare to maintain and to call properly.
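Concretely, instead of RETURN -1 the CATCH block would re-raise the caught error. A minimal sketch using the standard pre-2012 RAISERROR idiom (on SQL Server 2012+ a bare THROW; does the same in one line):

BEGIN TRY
    BEGIN TRAN
    -- ... the procedure's work ...
    COMMIT TRAN
END TRY
BEGIN CATCH
    IF XACT_STATE() <> 0
        ROLLBACK TRAN
    -- re-raise the original error instead of RETURN -1
    DECLARE @ErrorMessage nvarchar(2048), @ErrorSeverity int, @ErrorState int
    SELECT @ErrorMessage = ERROR_MESSAGE(),
           @ErrorSeverity = ERROR_SEVERITY(),
           @ErrorState = ERROR_STATE()
    RAISERROR (@ErrorMessage, @ErrorSeverity, @ErrorState)
END CATCH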