
Computationally intensive algorithms

I am planning to write a bunch of programs implementing computationally intensive algorithms. The programs would serve as an indicator of the relative performance of different compilers and hardware.

I want to pick a common set of algorithms used in different fields, like Bioinformatics, Gaming, Image Processing, et al. The reason I want to do this is to learn the algorithms and to have a personal mini benchmark suite that is small, useful, and easy to maintain.

Any advice on algorithm selection would be tremendously helpful.

asked Mar 21 '11 by Sayan

3 Answers

Benchmarks are not worthy of your attention!

The very best guide to processor performance is: http://www.agner.org/optimize/

And then someone will chuck it in a box with 3 GB of RAM, defeating any dual-channel hopes, and your beautifully tuned benchmark will give wildly different results again.

If you have a piece of performance-critical code and you are sure you've picked the winning algorithm, you can't then use a generic benchmark to determine the best compiler. You have to actually compile your specific piece of code with each compiler and benchmark it with that. And the results you gain, whilst useful for you, will not extrapolate to others.
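For what it's worth, here is a minimal C++ sketch of what "benchmark them with that" might look like; the kernel, data size and repetition count are placeholder assumptions, so drop in your actual code. Build the same file with each candidate compiler and compare:

    #include <chrono>
    #include <cstdio>
    #include <vector>

    // Stand-in for "your specific piece of code".
    // Writing the result to a volatile stops the compiler
    // from optimising the whole loop away.
    volatile double sink;

    static void kernel(const std::vector<double>& v) {
        double acc = 0.0;
        for (double x : v) acc += x * x;
        sink = acc;
    }

    int main() {
        std::vector<double> data(1 << 20, 1.5);

        kernel(data);                          // warm up the caches
        double best_ms = 1e300;
        for (int rep = 0; rep < 20; ++rep) {
            auto t0 = std::chrono::steady_clock::now();
            kernel(data);
            auto t1 = std::chrono::steady_clock::now();
            double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
            if (ms < best_ms) best_ms = ms;    // keep the best run to cut noise
        }
        std::printf("best of 20: %.3f ms\n", best_ms);
    }

Then something like g++ -O2 bench.cpp -o bench_gcc versus icc -O2 bench.cpp -o bench_icc, run a few times on a quiet machine, tells you about your code with those compilers, and nothing more.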

Case in point: people who write compression software - zip and 7zip and the high-end stuff like PPM and context mixing - are very careful about performance and benchmark their programs. They hang out on www.encode.ru.

And the situation is this: engineers developing the same basic algorithm - say LZ, or entropy coding like arithmetic coding and Huffman - all find that very different compilers are better.

That is to say, two engineers solving the same problem with the same high-level algorithm will each benchmark their implementation, and their results will recommend different compilers...
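Purely for illustration, here is roughly the kind of kernel those engineers are benchmarking: a minimal Huffman code-length construction in C++ (a sketch, not anyone's production coder; compile with -std=c++17):

    #include <cstdio>
    #include <queue>
    #include <utility>
    #include <vector>

    // Given symbol frequencies, build a Huffman tree and return the
    // code length (leaf depth) assigned to each symbol.
    std::vector<int> huffman_lengths(const std::vector<long>& freq) {
        int n = (int)freq.size();
        std::vector<int> parent(n, -1);        // nodes 0..n-1 are leaves
        using Item = std::pair<long, int>;     // (weight, node id)
        std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
        for (int i = 0; i < n; ++i) pq.push({freq[i], i});

        while (pq.size() > 1) {                // merge the two lightest nodes
            auto [wa, a] = pq.top(); pq.pop();
            auto [wb, b] = pq.top(); pq.pop();
            int id = (int)parent.size();       // new internal node
            parent.push_back(-1);
            parent[a] = parent[b] = id;
            pq.push({wa + wb, id});
        }

        std::vector<int> len(n, 0);
        for (int i = 0; i < n; ++i)            // code length = steps to root
            for (int v = i; parent[v] != -1; v = parent[v]) ++len[i];
        return len;
    }

    int main() {
        std::vector<long> freq = {45, 13, 12, 16, 9, 5};   // toy frequencies
        std::vector<int> len = huffman_lengths(freq);
        for (size_t i = 0; i < len.size(); ++i)
            std::printf("symbol %zu: %d bits\n", i, len[i]);
    }

The hot loops here are tiny (heap operations and integer compares), which is part of why each compiler's inlining and instruction-selection decisions show up so strongly in results like these.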

(I have seen the same thing happen repeatedly in competition programming, e.g. Al Zimmermann's Programming Contests, an equally performance-attentive community.)

(The newer GCC 4.x series is very good all round, but that's just my data point; others still favour ICC.)

(Platform benchmarks for IO-related tasks are another thing altogether; people don't appreciate how differently Linux, Windows and FreeBSD (and the rest) perform under stress. Benchmarks there - the same workload on the same machine, on different machines, or with different core counts - would be generally informative. Sadly, there aren't enough benchmarks like that about.)

answered Oct 06 '22 by Will


There was some work done at Berkeley on this a few years ago. They identified 13 common application patterns for parallel programming, the "13 Dwarfs". These include things like linear algebra, n-body models, FFTs, etc.; one of them is sketched after the links below.

http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.html

See page 10 onwards.

There are some sample .NET implementations here:

http://paralleldwarfs.codeplex.com/
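For a flavour of how small these dwarf kernels can be as benchmark candidates, here is a hypothetical C++ sketch of one of them, a naive O(n^2) gravitational n-body acceleration step (the Earth-Moon numbers in main are just a sanity check):

    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Body { double x, y, z, m; };

    // One step of the n-body dwarf: Newtonian acceleration on each body
    // from every other body. eps softens near-zero separations.
    void accelerations(const std::vector<Body>& b,
                       std::vector<double>& ax,
                       std::vector<double>& ay,
                       std::vector<double>& az) {
        const double G = 6.674e-11, eps = 1e-9;
        const size_t n = b.size();
        for (size_t i = 0; i < n; ++i) {
            double sx = 0, sy = 0, sz = 0;
            for (size_t j = 0; j < n; ++j) {
                if (i == j) continue;
                double dx = b[j].x - b[i].x;
                double dy = b[j].y - b[i].y;
                double dz = b[j].z - b[i].z;
                double r2 = dx * dx + dy * dy + dz * dz + eps;
                double inv_r3 = 1.0 / (r2 * std::sqrt(r2));
                sx += G * b[j].m * dx * inv_r3;
                sy += G * b[j].m * dy * inv_r3;
                sz += G * b[j].m * dz * inv_r3;
            }
            ax[i] = sx; ay[i] = sy; az[i] = sz;
        }
    }

    int main() {
        std::vector<Body> b = {{0, 0, 0, 5.97e24},          // Earth
                               {3.84e8, 0, 0, 7.35e22}};    // Moon
        std::vector<double> ax(b.size()), ay(b.size()), az(b.size());
        accelerations(b, ax, ay, az);
        std::printf("moon acceleration: %.3e m/s^2\n", ax[1]);  // ~ -2.7e-3
    }

The same kernel scales from a toy like this up to whatever problem size you need, which makes it a convenient unit for a small, maintainable suite.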

answered Oct 06 '22 by Ade Miller


The typical one is the Fast Fourier Transform; perhaps you could also do something like the Lucas–Lehmer primality test.
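A minimal C++ sketch of the Lucas–Lehmer test, valid only for exponents p <= 63 because it relies on the GCC/Clang unsigned __int128 extension to keep the squaring exact (real tests for large Mersenne candidates do the squaring with FFT-based multiplication, which neatly ties the two suggestions together):

    #include <cstdio>

    // Lucas-Lehmer: for an odd prime p, let M = 2^p - 1, s = 4, and
    // iterate s <- s^2 - 2 (mod M) exactly p - 2 times.
    // M is prime if and only if the final s is 0.
    bool lucas_lehmer(unsigned p) {          // requires 2 < p <= 63
        unsigned long long m = (1ULL << p) - 1;
        unsigned __int128 s = 4;
        for (unsigned i = 0; i < p - 2; ++i)
            s = (s * s + m - 2) % m;         // + m avoids underflow when s < 2
        return s == 0;
    }

    int main() {
        // Of these exponents, 3 5 7 13 17 19 31 61 give Mersenne primes.
        for (unsigned p : {3u, 5u, 7u, 11u, 13u, 17u, 19u, 23u, 31u, 61u})
            std::printf("M_%u is %s\n", p,
                        lucas_lehmer(p) ? "prime" : "composite");
    }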

answered Oct 06 '22 by Brian