
Inconsistent multiplication performance with floats

While testing the performance of floats in .NET, I stumbled onto a weird case: for certain values, multiplication seems way slower than normal. Here is the test case:

using System;
using System.Diagnostics;

namespace NumericPerfTestCSharp {
    class Program {
        static void Main() {
            Benchmark(() => float32Multiply(0.1f), "\nfloat32Multiply(0.1f)");
            Benchmark(() => float32Multiply(0.9f), "\nfloat32Multiply(0.9f)");
            Benchmark(() => float32Multiply(0.99f), "\nfloat32Multiply(0.99f)");
            Benchmark(() => float32Multiply(0.999f), "\nfloat32Multiply(0.999f)");
            Benchmark(() => float32Multiply(1f), "\nfloat32Multiply(1f)");
        }

        static void float32Multiply(float param) {
            float n = 1000f;
            for (int i = 0; i < 1000000; ++i) {
                n = n * param;
            }
            // Write result to prevent the compiler from optimizing the entire method away
            Console.Write(n);
        }

        static void Benchmark(Action func, string message) {
            // warm-up call
            func();

            var sw = Stopwatch.StartNew();
            for (int i = 0; i < 5; ++i) {
                func();
            }
            Console.WriteLine(message + " : {0} ms", sw.ElapsedMilliseconds);
        }
    }
}

Results:

float32Multiply(0.1f) : 7 ms
float32Multiply(0.9f) : 946 ms
float32Multiply(0.99f) : 8 ms
float32Multiply(0.999f) : 7 ms
float32Multiply(1f) : 7 ms

Why are the results so different for param = 0.9f?

Test parameters: .NET 4.5, Release build, code optimizations ON, x86, no debugger attached.

Asked Dec 20 '12 by Asik

1 Answer

As others have mentioned, various processors do not support normal-speed calculations when subnormal floating-point values are involved. This is either a design defect (if the behavior impairs your application or is otherwise troublesome) or a feature (if you prefer the cheaper processor or alternative use of silicon that was enabled by not using gates for this work).

It is illuminating to understand why there is a transition at .5:

Suppose you are multiplying by p. Eventually, the value becomes so small that the result is some subnormal value (below 2^-126 in 32-bit IEEE binary floating point). Then multiplication becomes slow. As you continue multiplying, the value continues decreasing, and it reaches 2^-149, which is the smallest positive number that can be represented. Now, when you multiply by p, the exact result is of course 2^-149·p, which is between 0 and 2^-149, which are the two nearest representable values. The machine must round the result and return one of these two values.

Which one? If p is less than ½, then 2^-149·p is closer to 0 than to 2^-149, so the machine returns 0. Then you are not working with subnormal values anymore, and multiplication is fast again. If p is greater than ½, then 2^-149·p is closer to 2^-149 than to 0, so the machine returns 2^-149, and you continue working with subnormal values, and multiplication remains slow. If p is exactly ½, the rounding rules say to use the value that has zero in the low bit of its significand (fraction portion), which is zero (2^-149 has 1 in its low bit).
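To see that rounding behavior directly, here is a small standalone sketch (not from the original question; the variable names are mine) that multiplies float.Epsilon, which is 2^-149, by values on either side of ½. The explicit (float) casts force rounding to single precision even if the JIT evaluates the products in a wider format.

using System;

class SubnormalRoundingDemo {
    static void Main() {
        // float.Epsilon is the smallest positive subnormal float, 2^-149.
        // Stored in a local (not a const) so the products are computed at run time.
        float smallest = float.Epsilon;

        // p < ½: the exact product is closer to 0 than to 2^-149, so it rounds
        // to 0 and a chain of multiplications escapes the subnormal range.
        float down = (float)(smallest * 0.4f);
        Console.WriteLine(down);   // 0

        // p > ½: the exact product is closer to 2^-149, so it rounds back up
        // to 2^-149 and the value stays subnormal.
        float up = (float)(smallest * 0.9f);
        Console.WriteLine(up);     // 1.401298E-45, i.e. 2^-149

        // p = ½: a tie between 0 and 2^-149; round-to-even picks 0, because
        // the low significand bit of 2^-149 is 1.
        float tie = (float)(smallest * 0.5f);
        Console.WriteLine(tie);    // 0
    }
}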

You report that .99f appears fast. Since .99 is greater than ½, that case should also get stuck at 2^-149 and end in the slow behavior. Perhaps the code you posted is not exactly the code for which you measured fast performance with .99f? Perhaps the starting value or the number of iterations was changed?

There are ways to work around this problem. One is to use hardware mode settings that treat any subnormal operand as zero and flush any subnormal result to zero, called the “denormals are zero” and “flush to zero” modes. I do not use .NET and cannot advise you about how to set these modes in .NET.
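Since the answer does not give a way to enable those modes from .NET, here is a software-level alternative, a sketch of my own rather than the hardware FTZ/DAZ modes or anything from the original answer: clamp the running value to zero as soon as it drops below the smallest normal float, so that at most one iteration ever multiplies a subnormal value. The method name float32MultiplyFlushed and the literal for the smallest normal float (there is no named constant for it in .NET 4.5) are assumptions of this sketch; it is meant to sit alongside float32Multiply in the Program class above.

static void float32MultiplyFlushed(float param) {
    const float SmallestNormal = 1.17549435e-38f; // ≈ 2^-126, the smallest normal float
    float n = 1000f;
    for (int i = 0; i < 1000000; ++i) {
        n = n * param;
        // Software "flush to zero": once the value leaves the normal range,
        // replace it with 0 so later iterations never multiply a subnormal.
        if (Math.Abs(n) < SmallestNormal) {
            n = 0f;
        }
    }
    // Write result to prevent the compiler from optimizing the entire method away
    Console.Write(n);
}

The extra comparison costs a little on every iteration, so this only pays off when subnormal operands would otherwise dominate the loop.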

Another approach is to add a tiny value each time, such as

n = (n+e) * param;

where e is at least 2^-126/param. Note that 2^-126/param should be calculated rounded upward, unless you can guarantee that n is large enough that (n+e) * param does not produce a subnormal value. This also presumes n is not negative. The effect of this is to make sure the calculated value is always large enough to be in the normal range, never subnormal.

Adding e in this way of course changes the results. However, if you are, for example, processing audio with some echo effect (or other filter), then the value of e is too small to cause any effects observable by humans listening to the audio. It is likely too small to cause any change in the hardware behavior when producing the audio.
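As a concrete illustration, here is one way the benchmark's loop might apply that bias. It is only a sketch: the method name float32MultiplyBiased is made up, and the small multiplicative safety factor is an assumed stand-in for a correctly rounded-up division, not something from the answer above.

static void float32MultiplyBiased(float param) {
    const float SmallestNormal = 1.17549435e-38f; // ≈ 2^-126, the smallest normal float
    // e must be at least 2^-126/param; nudge the quotient up slightly instead of
    // performing a correctly rounded-up division.
    float e = (SmallestNormal / param) * 1.000001f;
    float n = 1000f;
    for (int i = 0; i < 1000000; ++i) {
        // (n + e) is at least e, so (n + e) * param is at least about 2^-126 and
        // the running value never enters the subnormal range.
        n = (n + e) * param;
    }
    // Write result to prevent the compiler from optimizing the entire method away
    Console.Write(n);
}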

Answered by Eric Postpischil