After searching a long time for a performance bug, I read about denormal floating point values.
Apparently denormalized floating-point values can be a major performance concern as is illustrated in this question: Why does changing 0.1f to 0 slow down performance by 10x?
I have an Intel Core 2 Duo and I am compiling with gcc, using -O2.
So what do I do? Can I somehow instruct g++ to avoid denormal values? If not, can I somehow test if a float is denormal?
Wait. Before you do anything, do you actually know that your code is encountering denormal values, and that they're having a measurable performance impact?
Assuming you know that, do you know whether the algorithm(s) you're using are stable with denormal support turned off? Getting the wrong answer 10x faster is not usually a good performance optimization.
Those issues aside:
If you want to detect denormal values to confirm their presence, you have a few options. If you have a C99 standard library or Boost, you can use the fpclassify macro. Alternatively, you can compare the absolute values of your data to the smallest positive normal number.
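For example, here is a minimal sketch of both checks, assuming a C++11 compiler (fpclassify from <cmath> and std::numeric_limits); the function names are just illustrative:

```cpp
#include <cmath>   // std::fpclassify, FP_SUBNORMAL, std::fabs
#include <limits>  // std::numeric_limits

// Check via the classification macro/function.
bool is_denormal_classify(float x) {
    return std::fpclassify(x) == FP_SUBNORMAL;
}

// Equivalent check: nonzero, but smaller in magnitude than the
// smallest positive normal float (std::numeric_limits<float>::min()).
bool is_denormal_compare(float x) {
    return x != 0.0f &&
           std::fabs(x) < std::numeric_limits<float>::min();
}
```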
You can set the hardware to flush denormal values to zero (FTZ), or to treat denormal inputs as zero (DAZ). The easiest way, if it is properly supported on your platform, is probably to use the fesetenv() function from the C header fenv.h. However, this is one of the least widely supported features of the C standard, and it is inherently platform-specific anyway. You may want to just use some inline assembly to directly set the FPU state to enable DAZ/FTZ.
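On x86 with SSE, one alternative to raw inline assembly is the MXCSR intrinsics that gcc exposes. This is a minimal sketch under the assumption that <xmmintrin.h>/<pmmintrin.h> are available and that your float math actually runs on the SSE unit (the x87 FPU has no FTZ/DAZ bits):

```cpp
#include <xmmintrin.h>  // _MM_SET_FLUSH_ZERO_MODE, _MM_FLUSH_ZERO_ON
#include <pmmintrin.h>  // _MM_SET_DENORMALS_ZERO_MODE, _MM_DENORMALS_ZERO_ON

void enable_ftz_daz() {
    // FTZ: denormal results are flushed to zero.
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    // DAZ: denormal inputs are treated as zero.
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
}
```

Note that these are bits in the SSE MXCSR register, so they only affect SSE code paths (on 32-bit x86 you may need -mfpmath=sse), and the setting is per-thread, so each thread that does the heavy math would need to call this.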