 

Should we generally use float literals for floats instead of the simpler double literals?

In C++ (or maybe just in our compilers, VC8 and VC10), 3.14 is a double literal and 3.14f is a float literal.

Now I have a colleague that stated:

We should use float literals for float calculations and double literals for double calculations, as this can have an impact on the precision of a calculation when constants are used in it.

Specifically, I think he meant:

double d1, d2;
float f1, f2;
// ... init and stuff ...
f1 = 3.1415  * f2;
f1 = 3.1415f * f2; // any difference?
d1 = 3.1415  * d2;
d1 = 3.1415f * d2; // any difference?

Or, added by me, even:

d1 = 42    * d2;
d1 = 42.0f * d2; // any difference?
d1 = 42.0  * d2; // any difference?

More generally, the only point I can see for using 2.71828183f is to make sure that the constant I'm trying to specify will actually fit into a float (compiler error/warning otherwise).

Can someone shed some light on this? Do you specify the f postfix? Why?

To quote from an answer what I implicitly took for granted:

If you're working with a float variable and a double literal the whole operation will be done as double and then converted back to float.

Could there possibly be any harm in this? (Other than a very, very theoretical performance impact?)

Further edit: It would be nice if answers containing technical details (appreciated!) could also include how these differences affect general purpose code. (Yes, if you're number crunching, you probably like to make sure your big-n floating point ops are as efficient (and correct) as possible -- but does it matter for general purpose code that's called a few times? Isn't it cleaner if the code just uses 0.0 and skips the -- hard to maintain! -- float suffix?)

asked Oct 05 '11 by Martin Ba



2 Answers

Yes, you should use the f suffix. Reasons include:

  1. Performance. When you write float foo(float x) { return x*3.14; }, you force the compiler to emit code that converts x to double, does the multiplication, and then converts the result back to single precision. If you add the f suffix, both conversions are eliminated. On many platforms, each of those conversions is about as expensive as the multiplication itself.

  2. Performance (continued). There are platforms (most cellphones, for example), on which double-precision arithmetic is dramatically slower than single-precision. Even ignoring the conversion overhead (covered in 1.), every time you force a computation to be evaluated in double, you slow your program down. This is not just a "theoretical" issue.

  3. Reduce your exposure to bugs. Consider this example:

     float x = 1.2;
     if (x == 1.2)
         // something

     Is something executed? No, it is not: x holds 1.2 rounded to a float, but it is compared against the double-precision value 1.2, and the two are not equal.

answered Sep 27 '22 by Stephen Canon


I suspect something like this: if you're working with a float variable and a double literal, the whole operation will be done as double and then converted back to float.

If you use a float literal, notionally speaking the computation will be done at float precision, even though some hardware will convert it to double anyway to do the calculation.

answered Sep 27 '22 by Mark B