 

Which is the best practice for using try-catch blocks with a foreach loop? [closed]

Tags:

c#

What is the best practice for using try{} catch{} blocks with regard to performance?

foreach (var one in all)
{
    try
    {
        //do something
    }
    catch { }
}

Or

try
{
    foreach (var one in all)
    {
        // do something
    }
}
catch { }
asked Jun 26 '14 by Rafik Bari

1 Answer

As per request, here is my answer. The fun part is at the end, so if you already know what try-catch is, feel free to scroll. (Sorry for being partially off-topic.)

Let's start with the concept of try-catch in general.

Why? Because this question suggests an incomplete understanding of how, and when, to use this feature.

What is try-catch? Or rather, what is try-catch-finally?

(This chapter is also known as: why the hell haven't you used Google to learn about it yet?)

  1. Try - contains potentially unstable code, which means you should move all stable parts out of it. It always executes, but without any guarantee of completion.

  2. Catch - here you place code designed to handle a failure that occurred in the Try part. It executes only when an exception occurs in the Try block.

  3. Finally - the third and last part, which in some languages may not exist. It always executes. Typically it is used to release resources and close I/O streams.

In general, try-catch is a way to separate potentially unstable code from the rest of the program. In machine-language terms, it roughly amounts to pushing the values of all processor registers onto the stack to protect them from corruption, and then telling the environment to ignore execution errors because they will be handled manually by the code.
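A minimal C# sketch of all three parts (the file name data.txt is a made-up example):

using System;
using System.IO;

class TryCatchFinallyDemo
{
    static void Main()
    {
        StreamReader reader = null;
        try
        {
            // Potentially unstable: the file may be missing or locked.
            reader = new StreamReader("data.txt");
            Console.WriteLine(reader.ReadLine());
        }
        catch (IOException ex)
        {
            // Runs only if the Try block threw an I/O exception.
            Console.WriteLine("Could not read file: " + ex.Message);
        }
        finally
        {
            // Always runs: release the stream whether or not we failed.
            if (reader != null)
                reader.Dispose();
        }
    }
}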

What's the best practice for using try-catch blocks?

Not using them at all. Covering code with try-catch means you are expecting it to fail. Why does code fail? Because it's badly written. It is much better, both for performance and quality, to write code that needs no try-catch to work safely.

Sometimes, especially when using third-party code, try-catch is the easiest and most dependable option, but most of the time, using try-catch on your own code indicates design issues.

Examples:

  1. Data parsing - Using try-catch in data parsing is very, very bad. There are tons of ways to safely parse even the weirdest data. One of the ugliest is the regular-expression approach (got a problem? Use regex; problems love to be plural). String-to-int conversion failed? Check your data first; .NET even provides methods like TryParse (see the sketch after this list).

  2. Division by zero, precision problems, numerical overflow - do not cover these with try-catch; upgrade your code instead. Arithmetic code should start as a good math equation. Of course you can heavily modify mathematical equations to run a lot faster (for example with 0x5f375a86, the fast inverse square root constant), but you still need good math to begin with.

  3. List index out of bounds, stack overflow, segmentation fault, Heartbleed - here you have an even bigger fault in code design. These errors simply should not happen in properly written code running in a healthy environment. All of them come down to one simple mistake: the code did not make sure that the index (memory address) stayed within the expected boundaries.

  4. I/O errors - Before attempting to use a stream (memory, file, network), the first step is to check that the stream exists (not null, file exists, connection open). Then you check that the stream is correct: is your index within its size? Is the stream ready to use? Is its queue/buffer capacity big enough for your data? All of this can be done without a single try-catch, especially when you work within a framework (.NET, Java, etc.).

    Of course, there is still the problem of unexpected access issues - a rat munched your network cable, the hard disk drive melted. Here the use of try-catch can not only be forgiven but should occur. Still, it needs to be done in a proper manner, as in the file sketch after this list. You should not place the whole stream-manipulating code in a try-catch; instead, use the built-in methods to check the stream's state.

  5. Bad external code - When you have to work with a horrible code library, without any means of correcting it (welcome to the corporate world), try-catch is often the only way to protect the rest of your code. But yet again, only the code that is directly dangerous (the call to the horrible function in the badly written library) should be placed in a try-catch.
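A quick sketch of the guard-first style from points 1 and 4 (the input string and file path are made-up examples):

using System;
using System.IO;

class GuardsBeforeExceptions
{
    static void Main()
    {
        // Point 1: validate instead of catching a FormatException.
        string input = "1234"; // hypothetical user input
        int value;
        if (int.TryParse(input, out value))
            Console.WriteLine("Parsed: " + value);
        else
            Console.WriteLine("Not a number - no exception needed.");

        // Point 4: check the stream's state up front; keep try-catch
        // only for the truly unexpected (melted disk, munched cable).
        string path = "data.txt"; // hypothetical file
        if (File.Exists(path))
        {
            try
            {
                using (var reader = new StreamReader(path))
                    Console.WriteLine(reader.ReadLine());
            }
            catch (IOException ex)
            {
                Console.WriteLine("Unexpected I/O failure: " + ex.Message);
            }
        }
    }
}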

So when should you use try-catch, and when shouldn't you?

It can be answered with a very simple question.

Can I correct my code so that it does not need try-catch?

Yes? Then drop that try-catch and fix your code.

No? Then pack the unstable part in a try-catch and provide good error handling.

How do you handle exceptions in Catch?

The first step is to know what types of exception can occur. Modern environments provide an easy way to segregate exceptions into classes. Catch the most specific exception you can. Doing I/O? Catch the I/O ones. Doing math? Catch the arithmetic ones.
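A small sketch of catching the most specific exception type (the divisor values are made up):

using System;

class SpecificCatchDemo
{
    static void Main()
    {
        int[] divisors = { 4, 0, 2 }; // hypothetical data
        foreach (int d in divisors)
        {
            try
            {
                Console.WriteLine(100 / d);
            }
            catch (DivideByZeroException)
            {
                // The most specific type: we know exactly what went wrong.
                Console.WriteLine("Division by zero - skipping this item.");
            }
            // No catch (Exception): anything else is a genuine bug
            // and should surface instead of being swallowed.
        }
    }
}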

What should the user know?

Only what the user can control:

  • Network error - check your cables.
  • File I/O error - format c:.
  • Out of memory - upgrade.

Other exceptions will just inform the user of how badly your code is written, so stick to a mysterious "Internal Error".

Try-catch inside the loop or outside of it?

As plenty of people have said, there is no definitive answer to this question. It all depends on the code you have written.

A general rule could be: atomic tasks, where each iteration is independent - try-catch inside the loop. Chained computation, where each iteration depends on the previous ones - try-catch around the loop. (A sketch of both patterns follows.)
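A sketch of both patterns, reusing the question's loop (DoWork and ComputeStep are hypothetical placeholders):

// Atomic tasks: one bad item should not stop the others.
foreach (var one in all)
{
    try
    {
        DoWork(one); // hypothetical independent task
    }
    catch (InvalidOperationException ex)
    {
        Console.WriteLine("Item failed, continuing: " + ex.Message);
    }
}

// Chained computation: a failure invalidates everything after it.
try
{
    var state = 0;
    foreach (var one in all)
    {
        state = ComputeStep(state, one); // hypothetical dependent step
    }
}
catch (InvalidOperationException ex)
{
    Console.WriteLine("Chain broken, aborting: " + ex.Message);
}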

How different are for and foreach?

A foreach loop does not guarantee in-order execution. Sounds weird, and it almost never happens, but it is still possible. If you use foreach for the tasks it was created for (dataset manipulation), then you might want to place the try-catch around it. But as explained, you should try not to catch yourself using try-catch too often.

Fun part!

The real reason for this post is just a few lines away, dear readers!

As per Francine DeGrood Taylor's request, I will write a bit more about the fun part. Keep in mind that, as Joachim Isaksson noticed, it looks very odd at first sight.

Although this part focuses on .NET, it can apply to other JIT compilers and even, partially, to assembly.

So... how is it possible that a try-catch around a loop can speed it up? It just does not make any sense! Error handling means additional computation!

Check this Stack Overflow question about it: Try-catch speeding up my code? You can read the .NET-specific details there; here I will try to focus on how to abuse the effect. Keep in mind that the question is from 2012, so it may well have been "corrected" (it is not a bug, it's a feature!) in current .NET releases.

As explained above, try-catch separates a piece of code from the rest of the program. The separation works in a similar manner to a method call, so instead of a try-catch, you could also place a loop with heavy computations in a separate method.
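A sketch of that alternative (the method and data are made up); the method boundary, like the try-catch, gives the JIT a fresh register allocation for the hot loop:

static double SumRoots(double[] data)
{
    // The hot loop isolated in its own method: the JIT compiles it
    // with its own register allocation, much as a try-catch block would.
    double sum = 0;
    for (int i = 0; i < data.Length; i++)
        sum += Math.Sqrt(data[i]);
    return sum;
}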

How can separating code speed it up? Registers. The network is slower than the HDD, the HDD is slower than RAM, and RAM is a slowpoke compared to the ultrafast CPU cache. And then there are the CPU registers, which laugh at how slow the cache is.

Separating code usually means freeing up all the general-purpose registers - and that's exactly what try-catch does. Or rather, what the JIT does because of the try-catch.

The most prominent flaw of the JIT is its lack of precognition. It sees a loop, it compiles the loop. And by the time it finally notices that the loop will execute several thousand times and boasts calculations that make the CPU squeak, it is too late to free up registers. So the code in the loop must be compiled to use what's left of the registers.

Even one additional register can produce an enormous boost in performance. Every memory access takes horribly long, which means the CPU can sit unused for a noticeable amount of time. Although nowadays we have out-of-order execution, cute pipelines, and prefetching, there are still blocking operations which force code to halt.

And now let's talk about why x86 sucks and is trash compared to x64. The try-catch speed gain in the linked SE question did not occur when compiling for x64 - why?

Because there was no speed gain to begin with. All that existed was a speed loss caused by poor JIT output (classic compilers do not have this issue). The try-catch corrected the JIT's behavior mostly by accident.

The x86 registers were created for specific tasks. The x64 architecture doubled their size, but that still cannot change the fact that when doing a loop you must sacrifice CX, and similar rules apply to the other registers (except the poor orphan BX).

So why is x64 so awesome? It boasts eight additional 64-bit-wide registers with no specific purpose. You can use them for anything - not just theoretically, as with the x86 registers, but really for anything. Eight 64-bit registers means eight 64-bit variables stored directly in CPU registers instead of RAM, with no problem for doing math (which quite often requires AX and DX for results). What else does 64-bit mean? x86 can fit an Int into a register; x64 can fit a Long. If a math block has empty registers to work with, it can do most of its work without touching memory. And that's the real speed boost.

But that is not the end! You can also abuse the cache. The closer a cache level gets to the CPU, the faster it becomes, but also the smaller it gets (cost and physical size are the limits). You can optimize your dataset to fit in the cache at once - e.g., data chunks half the size of L1, leaving the other half for code and whatever the CPU finds necessary to cache (you cannot really control this unless you use assembly; in high-level languages you have to "guesstimate"). Usually each (physical) core has its own L1 memory, which means you can process several cached chunks at once (though the overhead of creating threads won't always be worth it).
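A rough sketch of that chunking idea (the 16 KB chunk is a guess at half of a typical 32 KB L1 data cache; measure on your own hardware):

const int ChunkBytes = 16 * 1024;                    // assumed: half of a 32 KB L1d
const int ChunkLength = ChunkBytes / sizeof(double);

static void SumAndSumOfSquares(double[] data, out double sum, out double sumSq)
{
    sum = 0;
    sumSq = 0;
    for (int start = 0; start < data.Length; start += ChunkLength)
    {
        int end = Math.Min(start + ChunkLength, data.Length);
        // Two passes per chunk: the second pass hits data that is
        // still warm in L1, unlike two passes over the whole array.
        for (int i = start; i < end; i++) sum += data[i];
        for (int i = start; i < end; i++) sumSq += data[i] * data[i];
    }
}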

Worth mentioning: old Pascal/Delphi used "16-bit dinosaurs" in several vital functions in the age of 32-bit processors (which made them two times slower than the 32-bit ones from C/C++). So love your CPU registers, even poor old BX. They are very grateful.

To add a bit more, as this has become a rather insane post already: why can C#/Java be both slower and faster than native code at the same time? JIT is the answer: framework code (IL) is translated into machine language, which means that long calculation blocks will execute just like native C/C++ code. However, remember that you can easily use native components in .NET (in Java you can go crazy attempting it). For a computation complex enough, the speed gain of native code (and native code can be boosted further by asm injects) can cover the overhead of switching between managed and native modes.
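A minimal P/Invoke sketch of mixing native code into .NET (heavy.dll and its exported sum_array function are hypothetical):

using System;
using System.Runtime.InteropServices;

static class NativeBoost
{
    // Hypothetical native library exposing a heavy computation.
    [DllImport("heavy.dll", CallingConvention = CallingConvention.Cdecl)]
    static extern double sum_array(double[] data, int length);

    static void Main()
    {
        var data = new double[1000000];
        // One managed-to-native switch, amortized over a big workload.
        Console.WriteLine(sum_array(data, data.Length));
    }
}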

answered Oct 13 '22 by PTwr