Has anyone found any performance boost/tradeoff of using BEGIN TRY...END TRY in SQL Server 2008 vs. the old IF @@ERROR <> 0? Just curious to know whether there are performance penalties or not.
This answer will obviously vary from compiler to compiler and application to application, but generally yes: try/catch is slower than some of the other canonical alternatives for error handling.
In general, wrapping your Java code with try/catch blocks doesn't have a significant performance impact on your applications. Only when exceptions actually occur is there a negative performance impact, which is due to the lookup the JVM must perform to locate the proper handler for the exception.
With a try/catch, you can handle an exception, which may include logging, retrying the failing code, or gracefully terminating the application. Without a try/catch, you run the risk of encountering unhandled exceptions. Try/catch statements aren't free, though: they do come with some performance overhead.
The overhead of TRY...CATCH is most likely a bit higher when there are no errors. On the other hand, it will be faster in many cases where there is an error. But you shouldn't code strictly for performance anyway.
Since the database disk hit issues are identical, there should be no appreciable performance issues.
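To make the comparison concrete, here is a sketch of the two styles applied to the same insert (the table `dbo.Widgets` is hypothetical):

```sql
-- Old style: check @@ERROR after each statement (pre-2005)
BEGIN TRANSACTION;
INSERT INTO dbo.Widgets (WidgetID, Name) VALUES (1, 'gear');
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRANSACTION;
    RETURN;
END
COMMIT TRANSACTION;
```

```sql
-- New style: TRY/CATCH (SQL Server 2005+)
BEGIN TRY
    BEGIN TRANSACTION;
    INSERT INTO dbo.Widgets (WidgetID, Name) VALUES (1, 'gear');
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
END CATCH
```

Note that in the old style, `@@ERROR` is reset by every statement, so it has to be checked (or copied into a variable) immediately after each statement you care about.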
This is an old question, but in 2012 Aaron Bertrand wrote a detailed article, Performance impact of different error handling techniques, in which he compared a few approaches for dealing with exceptions in SQL Server, and I thought it was worth mentioning here.
He says that the primary approaches people use to deal with exceptions are:
- Just let the engine handle it, and bubble any exception back to the caller.
- Use `BEGIN TRANSACTION` and `ROLLBACK` if `@@ERROR <> 0`.
- Use `TRY/CATCH` with `ROLLBACK` in the `CATCH` block (SQL Server 2005+).

And many take the approach that they should check if they're going to incur the violation first, since it seems cleaner to handle the duplicate yourself than to force the engine to do it.
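The "check first" approach he mentions looks roughly like this (a sketch; the table `dbo.Widgets` and the `@WidgetID`/`@Name` parameters are hypothetical):

```sql
-- Avoid the key violation up front instead of letting the engine raise it
IF NOT EXISTS (SELECT 1 FROM dbo.Widgets WHERE WidgetID = @WidgetID)
BEGIN
    INSERT INTO dbo.Widgets (WidgetID, Name)
    VALUES (@WidgetID, @Name);
END
```

Under concurrency this check alone is not airtight (another session can insert between the `EXISTS` check and the `INSERT`), which is why the article still pairs it with one of the error-handling techniques.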
His conclusions are:
If we think we are going to have a high rate of failure, or have no idea what our potential failure rate will be, then checking first to avoid violations in the engine is going to be tremendously worth our while. Even in the case where we have a successful insert every time, the cost of checking first is marginal and easily justified by the potential cost of handling errors later (unless your anticipated failure rate is exactly 0%).
These are the methods compared in the article:
| Method | Duplicate check | Error handling |
| --- | --- | --- |
| CheckInsert | Checks `IF EXISTS` first | Simple `INSERT` |
| CheckRollback | Checks `IF EXISTS` first | Uses `IF @@ERROR <> 0` |
| CheckTryCatch | Checks `IF EXISTS` first | Uses `TRY/CATCH` |
| JustInsert | | Simple `INSERT` |
| JustRollback | | Uses `IF @@ERROR <> 0` |
| JustTryCatch | | Uses `TRY/CATCH` |
Since SQL Server 2005, you should use TRY...CATCH to handle exceptions or log errors. It is considered best practice, and there should be no major performance hit from using it.
```sql
BEGIN TRY
    BEGIN TRANSACTION;
    -- SQL
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0  -- avoid rolling back when no transaction is open
        ROLLBACK TRANSACTION;
    SELECT
        ERROR_MESSAGE(),
        ERROR_NUMBER(); -- etc.
END CATCH
```
Forget performance, it's a damn sight safer, better and more predictable.
However, using @@ERROR usually requires a GOTO and/or lots of IF statements to manage it correctly, so I'd guess there may even be a tiny performance boost with TRY...CATCH.
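For illustration, a sketch of the @@ERROR-with-GOTO pattern that TRY...CATCH replaces (the table and statements are hypothetical):

```sql
BEGIN TRANSACTION;

UPDATE dbo.Widgets SET Name = 'cog' WHERE WidgetID = 1;
IF @@ERROR <> 0 GOTO ErrorHandler;  -- must check after every statement

DELETE FROM dbo.Widgets WHERE WidgetID = 2;
IF @@ERROR <> 0 GOTO ErrorHandler;

COMMIT TRANSACTION;
RETURN;

ErrorHandler:
ROLLBACK TRANSACTION;
```

Every statement needs its own check because `@@ERROR` is reset by the next statement executed, which is exactly the bookkeeping TRY...CATCH makes unnecessary.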