Possible Duplicate:
Measuring exception handling overhead in C++
Performance when exceptions are not thrown (C++)
I have heard anecdotally that using "try" blocks in C++ slows down the code at run-time even if no exceptions occur. I have searched but have been unable to find any explanation or substantiation for this. Does anyone know if this is true, and if so, why?
The answer, as usual, is "it depends".
It depends on how exception handling is implemented by your compiler.
If you're using MSVC and targeting 32-bit Windows, it uses a stack-based mechanism, which requires some setup code every time you enter a try block, so yes, that means you incur a penalty any time you enter such a block, even if no exception is thrown.
Practically every other platform (other compilers, as well as MSVC targeting 64-bit Windows) uses a table-based approach, where static tables are generated at compile time. When an exception is thrown, a table lookup is performed to drive unwinding, and no setup code has to be injected into the try blocks.
There are two common ways of implementing exceptions.
One, sometimes referred to as "table-based" or "DWARF", uses static data to specify how to unwind the stack from any given point; this has no runtime overhead except when an exception is actually thrown.
The other, sometimes referred to as "stack-based", "setjmp-longjmp" or "sjlj", maintains dynamic data to specify how to unwind the current call stack. This has some runtime overhead whenever you enter or leave a try block, and whenever you create or destroy an automatic object with a non-trivial destructor.
The first is more common in modern compilers (GCC has used it by default for many years). You'll have to check your compiler's documentation to see which it uses, and whether it's configurable.