
`float` vs `double` in casual single computations?

I see some people write code (in C/C++) where anything that casually needs floating point is declared as a `float`. This isn't a huge matrix where space matters, or an attempt to fit more values into SIMD lanes, or anything like that. It's little things like scaling a value by a percentage, or finding a ratio between two quantities.
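To make the scenario concrete, here is a sketch of the kind of "casual" computation I mean (the names and numbers are made up for illustration):

```cpp
#include <cstdio>

int main() {
    // "Casual" floating point: scale a quantity by a percentage
    // and compute a ratio between two counts.
    float price      = 19.99f;         // some people reach for float here...
    float discounted = price * 0.85f;  // scale by 85%

    double total = 19.99;              // ...I would have written double
    double ratio = 412.0 / 1271.0;     // ratio between two quantities

    std::printf("discounted=%f ratio=%f\n", discounted, ratio);
}
```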

I'd always used double and considered float to be only for when space really mattered. I recall the days when desktop CPUs had no floating-point hardware, and there might have been something to be said then about the performance of the software library. But with the first ’287 co-processor the native precision was 80 bits anyway; float was just for long-term storage of values in RAM or in files, and it didn't affect computation speed one iota.

Today, is there any reason to use float instead of double in such a habitual manner? Situation 1: code specific to PC/Mac hardware. Situation 2: portable code that may run on desktops and also on mobile devices such as phones.

Should I educate my team the way I remember it: "Hey, you know, float is the half-sized thing and double is the normal one"? Or is there some trade-off or reason why C++ programmers would use float everywhere and seemingly (from my point of view) not know that double exists?

My question is not specific to any one language, but my terminology is that float is 4 bytes and double is 8 bytes.
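For reference, a minimal check of those size assumptions. The C++ standard does not actually mandate these widths, but on mainstream desktop and mobile platforms float and double are IEEE-754 binary32 and binary64:

```cpp
#include <limits>

// Typical on desktop/mobile targets; the standard does not guarantee it.
static_assert(sizeof(float)  == 4, "float expected to be 32-bit");
static_assert(sizeof(double) == 8, "double expected to be 64-bit");
static_assert(std::numeric_limits<float>::is_iec559,  "expect IEEE-754 float");
static_assert(std::numeric_limits<double>::is_iec559, "expect IEEE-754 double");

int main() {}
```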

asked Nov 09 '22 by JDługosz

1 Answer

As I point out in this answer, there are a few ways in which float can be faster than double, but in general, unless you're sure that floating-point arithmetic is your bottleneck, I would suggest sticking to double. It will also avoid problems like this, this and this.
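The linked problems aren't reproduced here, but a typical example of the kind of surprise float causes is a snippet like the following (my own illustration, not taken from the linked questions): the float rounding of 0.1 differs from the double rounding, and accumulation error grows much faster in single precision.

```cpp
#include <cstdio>

int main() {
    float  f = 0.1f;   // nearest float to 0.1
    double d = 0.1;    // nearest double to 0.1

    // f is promoted to double for the comparison, but it carries the
    // float rounding error, so this prints "different".
    if (f == 0.1)
        std::printf("equal\n");
    else
        std::printf("different\n");   // this branch is taken

    // Accumulation error is also far larger in float:
    float  fs = 0.0f;
    double ds = 0.0;
    for (int i = 0; i < 10000000; ++i) { fs += 0.1f; ds += 0.1; }
    std::printf("float sum  = %f\n", fs);  // noticeably off from 1000000
    std::printf("double sum = %f\n", ds);  // much closer
}
```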

An obvious exception is when the only hardware support is for single precision (such as on the Cortex-M4), where any use of double falls back to slower software emulation.
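On such targets, a practical gotcha is that unsuffixed literals and the default C math functions are double, so they silently drag in the soft-float routines. A minimal sketch of the float-only style (the helper is hypothetical, assuming a single-precision-only FPU such as the Cortex-M4F):

```cpp
#include <cmath>

// Hypothetical helper: scale a sensor reading while staying in single precision.
float scale_reading(float raw) {
    // 0.5f keeps the multiply in float; an unsuffixed 0.5 would promote the
    // expression to double and pull in soft-float code on this kind of FPU.
    float half = raw * 0.5f;

    // std::sqrt has a float overload, so this stays single precision;
    // the C-style sqrt(double) would not.
    return std::sqrt(half);
}
```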

answered Jan 04 '23 by Simon Byrne