I did some timing tests and also read some articles like this one (last comment), and it looks like in a Release build, float and double values take the same amount of processing time.
How is this possible? When float is less precise and smaller than double, how can the CLR process doubles in the same amount of time?
Float and double
Double is more precise than float and occupies 64 bits, twice the number of bits a float occupies (32). Because double is more precise, we prefer it over float for storing large numbers. For example, to store the annual salary of the CEO of a company, double would be the more accurate choice.
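As a quick illustration of that precision difference (the salary figure below is just a made-up value), a float cannot hold a large amount with the cents intact, while a double can:

    // Hypothetical example: float carries ~7 significant decimal digits,
    // double ~15-16, so a large salary with cents doesn't fit in a float.
    using System;

    class PrecisionDemo
    {
        static void Main()
        {
            float  salaryF = 12345678.91f;   // more significant digits than float can hold
            double salaryD = 12345678.91;

            Console.WriteLine(salaryF.ToString("F2")); // prints something like 12345679.00 (cents lost)
            Console.WriteLine(salaryD.ToString("F2")); // prints 12345678.91
        }
    }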
If you compile to 64-bit, double will be faster. Compiling to 32-bit on a 64-bit machine and OS made float around 30% faster.
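The original timing code isn't reproduced here, but a rough sketch of that kind of test might look like the following (the loop body and iteration count are arbitrary choices, not the original benchmark); build it in Release and run it as both x86 and x64 to see the platform-dependent difference:

    // Rough float-vs-double throughput sketch, not a rigorous benchmark.
    using System;
    using System.Diagnostics;

    class FloatVsDouble
    {
        const int Iterations = 100_000_000;

        static void Main()
        {
            var sw = Stopwatch.StartNew();
            float f = 1.0001f;
            for (int i = 0; i < Iterations; i++)
                f = f * 1.0000001f + 0.0001f;   // multiply-add loop on float
            sw.Stop();
            Console.WriteLine($"float : {sw.ElapsedMilliseconds} ms ({f})");

            sw.Restart();
            double d = 1.0001;
            for (int i = 0; i < Iterations; i++)
                d = d * 1.0000001 + 0.0001;     // same loop on double
            sw.Stop();
            Console.WriteLine($"double: {sw.ElapsedMilliseconds} ms ({d})");
        }
    }

Printing the final values keeps the JIT from discarding the loops as dead code.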
If the hardware is (or is like) x86 with legacy x87 math, float and double are both extended (for free) to an internal 80-bit format, so both have the same performance (except for cache footprint / memory bandwidth).
On x86 processors, at least, float and double will each be converted to a 10-byte real by the FPU for processing. The FPU doesn't have separate processing units for the different floating-point types it supports.
The age-old advice that float is faster than double applied 100 years ago, when most CPUs didn't have built-in FPUs (and few people had separate FPU chips), so most floating-point manipulation was done in software. On these machines (which were powered by steam generated by the lava pits), it was faster to use floats. Now the only real benefit to floats is that they take up less space, which only matters if you have millions of them.
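For a rough sense of that space difference (the array size below is just an illustrative assumption), sizeof reports 4 bytes per float and 8 per double, so ten million values differ by roughly 40 MB:

    // Space-only comparison: same element count, half the bytes for float.
    using System;

    class Footprint
    {
        static void Main()
        {
            Console.WriteLine($"sizeof(float)  = {sizeof(float)} bytes");   // 4
            Console.WriteLine($"sizeof(double) = {sizeof(double)} bytes");  // 8

            const int count = 10_000_000;
            Console.WriteLine($"float[{count}]  ~ {count * sizeof(float)  / (1024 * 1024)} MB");
            Console.WriteLine($"double[{count}] ~ {count * sizeof(double) / (1024 * 1024)} MB");
        }
    }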