Are doubles faster than floats in C#?


The short answer is, "use whichever precision is required for acceptable results."

Your one guarantee is that operations performed on floating-point data are done in at least the precision of the highest-precision operand in the expression. So multiplying two floats is done with at least float precision, and multiplying a float and a double would be done with at least double precision. The standard states that "[floating-point] operations may be performed with higher precision than the result type of the operation."
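
For instance, here's a minimal C# sketch of that rule (the variable names are mine): the float-by-float product stays at float precision, while the mixed product is promoted and carried out in at least double precision:

    using System;

    // Both operands are float, so the multiply is performed at (at least)
    // float precision and produces a float.
    float a = 1.0f / 3.0f;
    float ff = a * a;

    // Mixing float and double: 'a' is implicitly converted to double and the
    // multiply is performed at (at least) double precision.
    double b = 1.0 / 3.0;
    double fd = a * b;

    Console.WriteLine(ff);
    Console.WriteLine(fd);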

Given that the JIT for .NET attempts to leave your floating point operations in the precision requested, we can take a look at documentation from Intel for speeding up our operations. On the Intel platform your floating point operations may be done in an intermediate precision of 80 bits, and converted down to the precision requested.

From Intel's guide to C++ Floating-point Operations [1] (sorry, I only have the dead-tree copy), they mention:

  • Use a single precision type (for example, float) unless the extra precision obtained through double or long double is required. Greater precision types increase memory size and bandwidth requirements. ...
  • Avoid mixed data type arithmetic expressions

That last point is important: you can slow yourself down with unnecessary casts to and from float and double, which result in JIT'd code that asks the x87 to round its 80-bit intermediate result down to the requested precision in between operations!
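
As a sketch of what that looks like in code (my own example, not from the Intel guide), compare a loop that bounces between the two types with one that stays in a single precision:

    static class PrecisionDemo
    {
        // Mixed precision: each iteration promotes the float to double for the
        // multiply, then casts the result back down to float for the store.
        public static float SumMixed(float[] data, double scale)
        {
            float sum = 0f;
            for (int i = 0; i < data.Length; i++)
                sum += (float)(data[i] * scale);
            return sum;
        }

        // Single precision throughout: no conversions inside the loop.
        public static float SumSingle(float[] data, float scale)
        {
            float sum = 0f;
            for (int i = 0; i < data.Length; i++)
                sum += data[i] * scale;
            return sum;
        }
    }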

[1] Yes, it says C++, but the C# standard plus some knowledge of the CLR tells us that the advice for C++ should apply here as well.


I just read "Microsoft .NET Framework - Application Development Foundation", 2nd edition, for the MCTS exam 70-536, and there is a note on page 4 (chapter 1):

NOTE Optimizing performance with built-in types
The runtime optimizes the performance of 32-bit integer types (Int32 and UInt32), so use those types for counters and other frequently accessed integral variables. For floating-point operations, Double is the most efficient type because those operations are optimized by hardware.

It's written by Tony Northrup. I don't know whether he's an authority or not, but I would expect the official book for the .NET exam to carry some weight. It is of course not a guarantee; I just thought I'd add it to this discussion.
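
For what it's worth, a small illustration of that note (my own example, not from the book) would be to keep counters in int and the floating-point work in double:

    static class BuiltInTypeNote
    {
        // Int32 for the loop counter, Double for the floating-point arithmetic,
        // per the note quoted above.
        public static double Average(double[] values)
        {
            double sum = 0.0;
            for (int i = 0; i < values.Length; i++)
                sum += values[i];
            return sum / values.Length;
        }
    }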


I profiled a similar question a few weeks ago. The bottom line is that for x86 hardware, there is no significant difference in the performance of floats versus doubles unless you become memory bound or start running into cache issues. In that case floats will generally have the advantage because they are smaller.

Current Intel CPUs perform all floating-point operations in 80-bit-wide registers, so the actual speed of the computation shouldn't vary between floats and doubles.
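
To put rough numbers on the memory-bound case (a quick sketch; the array size is arbitrary and mine), the same element count simply takes half the space as float, so far more of it fits in cache:

    using System;

    // sizeof on the built-in numeric types is a compile-time constant in C#.
    Console.WriteLine($"float:  {sizeof(float)} bytes per element");   // 4
    Console.WriteLine($"double: {sizeof(double)} bytes per element");  // 8

    // The same 10-million-element array is half the size as float[], so far
    // more of it fits in the CPU caches, which is where float starts to win.
    const int N = 10_000_000;
    Console.WriteLine($"float[{N}]  ~ {sizeof(float) * (long)N / (1024 * 1024)} MB");
    Console.WriteLine($"double[{N}] ~ {sizeof(double) * (long)N / (1024 * 1024)} MB");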


If load & store operations are the bottleneck, then floats will be faster, because they're smaller. If you're doing a significant number of calculations between loads and stores, it should be about equal.

Someone else mentioned avoiding conversions between float and double, and calculations that use operands of both types. That's good advice, and if, for example, you use math library functions that return doubles, then keeping everything as doubles will be faster.
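
For example (a small sketch of my own): Math.Sqrt in .NET takes and returns double, so a float-based version has to widen the argument and narrow the result on every call, while the all-double version has no conversions at all:

    using System;

    static class DistanceDemo
    {
        // Math.Sqrt works in double, so this version widens float -> double
        // for the call and narrows double -> float for the result.
        public static float LengthFloat(float x, float y)
            => (float)Math.Sqrt(x * x + y * y);

        // Keeping everything as double avoids both conversions.
        public static double LengthDouble(double x, double y)
            => Math.Sqrt(x * x + y * y);
    }

(Newer runtimes also ship MathF for single-precision math, which sidesteps the conversions if you do want to stay in float.)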


I'm writing a ray tracer, and replacing the floats with doubles in my Color class gives me a 5% speedup. Replacing the Vector class's floats with doubles is another 5% faster! Pretty cool :)

That's with a Core i7 920


With 387 FPU arithmetic, float is only faster than double for certain long iterative operations like pow, log, etc. (and only if the compiler sets the FPU control word appropriately).

With packed SSE arithmetic, it makes a big difference though.
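
One rough way to see the packed-SIMD width difference from C# (my own sketch, assuming a runtime where System.Numerics.Vector<T> is hardware accelerated): a vector register holds twice as many floats as doubles, so each packed instruction does twice as much single-precision work:

    using System;
    using System.Numerics;

    // With SSE2, Vector<float>.Count is 4 and Vector<double>.Count is 2
    // (8 and 4 with AVX), so each packed operation covers twice as many
    // single-precision values as double-precision ones.
    Console.WriteLine($"Hardware accelerated: {Vector.IsHardwareAccelerated}");
    Console.WriteLine($"floats per vector:  {Vector<float>.Count}");
    Console.WriteLine($"doubles per vector: {Vector<double>.Count}");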


Matthijs,

You are wrong. 32-bit is far more efficient than 16-bit on modern processors. Perhaps not memory-wise, but in terms of speed, 32-bit is the way to go.

You really should update your professor to something more "up-to-date". ;)

Anyway, to answer the question: float and double have exactly the same performance, at least on my Intel i7 870 (as theory predicts).

Here are my measurements:

(I made an "algorithm" that I repeated 10,000,000 times, then repeated that whole run 300 times and averaged the results.)

double
-----------------------------
1 core  = 990 ms
4 cores = 340 ms
6 cores = 282 ms
8 cores = 250 ms

float
-----------------------------
1 core  = 992 ms
4 cores = 340 ms
6 cores = 282 ms
8 cores = 250 ms
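
The post doesn't include the benchmark source, so here is only a minimal sketch of a comparable harness (the arithmetic kernel, the type names, and the way the work is split across threads are my assumptions, not the original code):

    using System;
    using System.Diagnostics;
    using System.Threading.Tasks;

    static class FloatVsDoubleBenchmark
    {
        const int TotalIterations = 10_000_000;

        // Stand-in kernels: the original "algorithm" is not shown in the post.
        static double KernelDouble(int iterations)
        {
            double x = 1.0001;
            for (int i = 0; i < iterations; i++)
                x = x * 1.0000001 + 0.0000001;
            return x;
        }

        static float KernelFloat(int iterations)
        {
            float x = 1.0001f;
            for (int i = 0; i < iterations; i++)
                x = x * 1.0000001f + 0.0000001f;
            return x;
        }

        // Splits the fixed amount of work across 'threads' workers and times it.
        // (The post's 300-run averaging is omitted for brevity.)
        static long Time(int threads, Func<int, double> kernel)
        {
            int perThread = TotalIterations / threads;
            var results = new double[threads];   // keeps the results alive
            var sw = Stopwatch.StartNew();
            Parallel.For(0, threads, t => results[t] = kernel(perThread));
            sw.Stop();
            GC.KeepAlive(results);
            return sw.ElapsedMilliseconds;
        }

        static void Main()
        {
            foreach (int threads in new[] { 1, 4, 6, 8 })
            {
                Console.WriteLine($"double, {threads} thread(s): {Time(threads, KernelDouble)} ms");
                Console.WriteLine($"float,  {threads} thread(s): {Time(threads, n => KernelFloat(n))} ms");
            }
        }
    }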