
Why this performance difference? (Exception catching)

After reading a question here about what our computers can do in one second, I ran a little test I'd had in mind for a while, and I'm very surprised by the results. Look:

A simple program that catches a null reference exception takes almost one second to do 1,900 iterations:

for (long c = 0; c < 1900; c++)
{
    try
    {
        test = null;
        test.x = 1;
    }
    catch (Exception ex)
    {
    }
}

Alternatively, checking whether test == null before doing the assignment, the same program can do approximately 200,000,000 iterations in one second:

for (long c = 0; c < 200000000; c++)
{
    test = null;
    if (!(test == null))
    {
        test.x = 1;
    }
}
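The comparison can be packaged as a small self-contained benchmark. This is only a sketch: the `Test` class, its field `x`, and the method names below are illustrative assumptions, not code from the original post.

```csharp
using System;
using System.Diagnostics;

// Illustrative stand-in for the object being dereferenced in the question.
class Test { public int x; }

static class ExceptionBenchmark
{
    // Count how many throw/catch round trips fit in the given time window.
    public static long RunExceptionLoop(int milliseconds)
    {
        Test test = null;
        long iterations = 0;
        var sw = Stopwatch.StartNew();
        while (sw.ElapsedMilliseconds < milliseconds)
        {
            try { test.x = 1; }                 // always throws NullReferenceException
            catch (NullReferenceException) { }  // swallowed, as in the question
            iterations++;
        }
        return iterations;
    }

    // Count how many plain null checks fit in the same time window.
    public static long RunCheckLoop(int milliseconds)
    {
        Test test = null;
        long iterations = 0;
        var sw = Stopwatch.StartNew();
        while (sw.ElapsedMilliseconds < milliseconds)
        {
            if (test != null) { test.x = 1; }   // branch never taken
            iterations++;
        }
        return iterations;
    }

    public static void Main()
    {
        Console.WriteLine($"throw/catch per second: {RunExceptionLoop(1000)}");
        Console.WriteLine($"null checks per second: {RunCheckLoop(1000)}");
    }
}
```

Build in Release mode and run it without the debugger attached; the gap between the two printed counts mirrors the numbers above.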

Does anyone have a detailed explanation of why there is this HUGE difference?

EDIT: Running the test in Release mode, outside Visual Studio, I'm getting 35,000-40,000 iterations vs. 400,000,000 iterations (always approximate).

Note I'm running this on a crappy Pentium IV 3.06 GHz.

Drevak asked Dec 07 '09


2 Answers

There's no way that should take a second for 1900 iterations unless you're running in the debugger. Running performance tests under the debugger is a bad idea.

EDIT: Note that this isn't a case of changing to the release build - it's a case of running without the debugger; i.e. hitting Ctrl-F5 instead of F5.

Having said that, provoking exceptions when you can avoid them very easily is also a bad idea.

My take on the performance of exceptions: if you're using them appropriately, they shouldn't cause significant performance issues unless you're in some catastrophic situation anyway (e.g. you're trying to make hundreds of thousands of web service calls and the network is down).

Exceptions are expensive under debuggers - certainly in Visual Studio, anyway - due to working out whether or not to break into the debugger etc, and probably doing any amount of stack analysis which is unnecessary otherwise. They're still somewhat expensive anyway, but you shouldn't be throwing enough of them to notice. There's still stack unwinding to do, relevant catch handlers to find, etc - but this should only be happening when something's wrong in the first place.

EDIT: Sure, throwing an exception is still going to give you fewer iterations per second (although 35000 is still a very low number - I'd expect over 100K) because you're doing almost nothing in the non-exception case. Let's look at the two:

Non-exception version of the loop body

  • Assign null to variable
  • Check whether variable is null; it is, so go back to the top of the loop

(As mentioned in the comments, it's quite possible that the JIT will optimise this away anyway...)

Exception version:

  • Assign null to variable
  • Dereference variable
    • Implicit check for nullity
    • Create an exception object
    • Check for any filtered exception handlers to call
    • Look up the stack for the catch block to jump to
    • Check for any finally blocks
    • Branch appropriately

Is it any wonder that you're seeing less performance?

Now compare that with the more common situation where you do a whole bunch of work, possibly IO, object creation etc - and maybe an exception is thrown. Then the difference becomes a lot less significant.
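To make that last point concrete, here is a small sketch (the helper name and inputs are mine, not from the answer): when each iteration does real work, such as parsing, the unwind cost of a rare throw contributes little to the total.

```csharp
using System;
using System.Globalization;

static class RareFailure
{
    // Sum the integers in the input, skipping the occasional bad record.
    public static int SumParsed(string[] inputs)
    {
        int sum = 0;
        foreach (var s in inputs)
        {
            try
            {
                // Real work happens on every iteration...
                sum += int.Parse(s, CultureInfo.InvariantCulture);
            }
            catch (FormatException)
            {
                // ...so a rare bad record's throw/catch cost is dwarfed
                // by the parsing done on the many good records.
            }
        }
        return sum;
    }
}
```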

Jon Skeet answered Sep 23 '22


Check out Chris Brumme's blog, with special attention to the Performance and Trends section, for an explanation of why exceptions are slow. They are called 'exceptions' for a reason: they should not happen very often.
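One illustration of that advice (a sketch; the helper names are mine) is the framework's own `int.Parse`/`int.TryParse` pair: when bad input is routine rather than exceptional, use the variant that reports failure without throwing.

```csharp
using System;

static class ParseDemo
{
    // Exception-based: acceptable when bad input really is exceptional.
    public static int ParseOrZero(string s)
    {
        try { return int.Parse(s); }
        catch (FormatException) { return 0; }
    }

    // Non-throwing: preferable when failures are expected and frequent.
    public static int TryParseOrZero(string s)
    {
        return int.TryParse(s, out int value) ? value : 0;
    }
}
```

Both helpers return the same results; the difference is only in the cost of the failure path.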

Gonzalo answered Sep 22 '22