How can I unit test performance optimisations in C?

I've been working on a portable C library that does image processing.

I've invested quite some time in a couple of low-level functions so as to take advantage of GCC auto-vectorization (SSE and/or AVX, depending on the target processor) while still preserving somewhat portable C code (extensions used: restrict and __builtin_assume_aligned).

Now it is time to test the code on Windows (MSVC compiler). But before that I'd like to set up some kind of unit testing so as not to shoot myself in the foot and lose the carefully chosen constructs that keep the code auto-vectorizable under GCC.

I could simply #ifdef/#endif the whole function body, but I am thinking of a longer-term solution that would detect any regression caused by a compiler update.

I am fairly confident with unit testing (there are tons of good frameworks out there), but I am a lot less confident about unit-testing such low-level functionality. How does one integrate performance unit testing into a CI service such as Jenkins?

PS: I'd like to avoid storing hard-coded timing results based on a particular processor, e.g.:

#include <sys/time.h> /* gettimeofday */
struct timeval t1, t2;
// start timer:
gettimeofday(&t1, NULL);
// call optimized function:
...
// stop timer:
gettimeofday(&t2, NULL);
// hard code some magic number:
if( t2.tv_sec - t1.tv_sec > 42 ) return EXIT_FAILURE;
asked Oct 30 '22 by malat

1 Answer

Your problem basically boils down to two parts:

  1. What's the best way to benchmark the performance of your carefully optimized code?

  2. How do you compare benchmark results across runs, so you can detect whether code changes and/or compiler updates have affected the performance of your code?

The Google Benchmark framework might provide a reasonable approach to problem #1. It is C++, but that wouldn't stop you from calling your C functions from it.
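For example, a minimal harness could look like the sketch below; my_filter_u8, its signature and the buffer sizes are placeholders for whatever one of your optimized routines actually looks like.

// bench_filter.cpp -- minimal Google Benchmark harness (sketch).
// my_filter_u8 stands in for one of your optimized C routines.
#include <benchmark/benchmark.h>
#include <cstddef>
#include <cstdint>
#include <vector>

extern "C" void my_filter_u8(const uint8_t *src, uint8_t *dst, size_t n);

static void BM_MyFilter(benchmark::State &state) {
    const size_t n = static_cast<size_t>(state.range(0));
    std::vector<uint8_t> src(n, 0x7F), dst(n);
    for (auto _ : state) {
        my_filter_u8(src.data(), dst.data(), n);
        benchmark::DoNotOptimize(dst.data());  // keep the call from being elided
        benchmark::ClobberMemory();
    }
    // report throughput so runs on different buffer sizes are easier to compare
    state.SetBytesProcessed(static_cast<int64_t>(state.iterations()) * n);
}
BENCHMARK(BM_MyFilter)->Arg(1 << 16)->Arg(1 << 20);

BENCHMARK_MAIN();

Each Arg(...) value reruns the routine on a different buffer size, and DoNotOptimize/ClobberMemory stop the compiler from optimizing the work out of the timing loop.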

This library can produce summary reports in various formats, including JSON and good old CSV. You could arrange for these to be stored somewhere per run.
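For instance, running the benchmark binary with --benchmark_out=results.json --benchmark_out_format=json (or csv) writes the report straight to a file, which the CI job can archive for later comparison.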

You could then write a simple Perl/Python/etc. script to compare the benchmark results between runs and raise the alarm if they deviate by more than some threshold (a rough sketch follows).
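To keep everything in one language here, the same idea is sketched in C++ rather than Perl/Python; a short script doing the same column comparison would work just as well. It reads the CSV reports from a stored baseline run and the current run, matches rows by benchmark name, and fails if cpu_time has grown beyond a tolerance. The file names, the 25% tolerance and the column handling are all illustrative.

// compare_bench.cpp -- sketch: compare two Google Benchmark CSV reports
// by benchmark name and flag any cpu_time regression beyond a tolerance.
#include <fstream>
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <vector>

static std::vector<std::string> split_csv(const std::string &line) {
    std::vector<std::string> fields;
    std::stringstream ss(line);
    std::string field;
    while (std::getline(ss, field, ',')) fields.push_back(field);
    return fields;
}

// Map each benchmark name to its cpu_time; the columns are located via the
// header row ("name", "cpu_time") rather than hard-coded positions.
static std::map<std::string, double> load_report(const char *path) {
    std::map<std::string, double> report;
    std::ifstream in(path);
    std::string line;
    int name_col = -1, cpu_col = -1;
    while (std::getline(in, line)) {
        std::vector<std::string> fields = split_csv(line);
        if (name_col < 0 || cpu_col < 0) {       // still looking for the header row
            name_col = cpu_col = -1;
            for (size_t i = 0; i < fields.size(); ++i) {
                if (fields[i] == "name") name_col = static_cast<int>(i);
                if (fields[i] == "cpu_time") cpu_col = static_cast<int>(i);
            }
            continue;
        }
        if (static_cast<int>(fields.size()) > name_col &&
            static_cast<int>(fields.size()) > cpu_col)
            report[fields[name_col]] = std::stod(fields[cpu_col]);
    }
    return report;
}

int main(int argc, char **argv) {
    if (argc != 3) {
        std::cerr << "usage: compare_bench baseline.csv current.csv\n";
        return 2;
    }
    const double tolerance = 1.25;               // fail on a >25% slowdown (arbitrary)
    std::map<std::string, double> base = load_report(argv[1]);
    std::map<std::string, double> cur  = load_report(argv[2]);
    int regressions = 0;
    for (const auto &kv : base) {
        auto found = cur.find(kv.first);
        if (found == cur.end()) continue;        // benchmark renamed or removed
        if (found->second > kv.second * tolerance) {
            std::cerr << "regression: " << kv.first << " " << kv.second
                      << " -> " << found->second << "\n";
            ++regressions;
        }
    }
    return regressions ? 1 : 0;                  // non-zero exit fails the CI step
}

The non-zero exit code is what lets a CI step mark the build as failed or unstable when a regression is detected.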

One thing you will have to be careful about is the potential for noise in your results caused by variables such as load on the system performing the test. You didn't say much about the environment you are running the tests in, but if it is (for example) a VM on a host containing other VMs, then your test results may be skewed by whatever is going on in those other VMs.

CI frameworks such as Jenkins allow you to script up the actions to be taken when running tests, so it should be relatively easy to integrate this approach into such frameworks.

answered Nov 14 '22 by harmic