 

What is the significance of 0.0f when initializing (in C)?

Tags:

c

I've seen code where people initialize float variables like this:

float num = 0.0f; 

Is there a significant difference between this and just doing the following below?

float num = 0;  

Thanks.. :)

asked Mar 04 '11 by O_O

People also ask

What is the meaning of 0.0 F?

The 'f' suffix indicates that you want a float: 0 is an int, 0.0 is a double, and 0.0f is a float. (Note that in C the f suffix must be attached to a floating-point constant, so write 0.0f or 0.f; a bare 0f is not valid.)

What does F after a number mean in C?

The f suffix marks a floating-point constant as having type float rather than the default type, double.

What is difference between float and double in C++?

double has roughly twice the precision of float (about 15-16 decimal digits versus about 7). float is a 32-bit IEEE 754 single-precision floating-point number: 1 bit for the sign, 8 bits for the exponent, and 23 bits for the mantissa (plus one implicit leading bit).


2 Answers

float x = 0 involves an implicit conversion from int to float.
float x = 0.0f involves no conversion at all.
float x = 0.0 involves an implicit conversion from double to float.

Depending on the compiler, an implicit conversion may require extra code to be generated (though for a constant like 0, most compilers will perform the conversion at compile time).

answered by EvilTeach


Probably the reason is that they once wrote something like:

float f = 1 / i; // i an integer 

Having debugged that, they swore always to decorate literals sufficiently to get the right type:

float f = 1.0f / i; 

In this case, the .0 is to ensure that the division is floating-point, not integer division. The f is because there's no need for the operation to be done in double. I expect there's more chance of an implementation where 1.0/i would be significantly slower for no benefit (software float emulation, indifferent optimization) than one where 1.0f is significantly slower for no benefit: if double is faster than float, that's because you have floating-point hardware, so conversion between the two will be very fast and will not introduce a significant slowdown.

Once you've got into the habit of decorating literals, you might well write:

float f = 0.0f; 

even though it has exactly the same effect as float f = 0.0; or float f = 0;.

Of course the author might not have gone through this revelation personally, they might just have inherited the style of someone else who did.

I'd just write 0.

R.. points out in a comment on another answer that writing 0 also has the benefit that when you change the type of f in future, you don't have to update the literal to match. And if the assignment is separate from the definition, then changing:

float f = something;
// some time later
f = 0.1f;

to:

double f = something;
// some time later
f = 0.1f;

is probably a bug: the 0.1f literal silently keeps only single precision even though f is now a double. Better to use 0.1 and let the compiler convert to float where necessary. You could probably argue that using float at all is an optimization, for space if not for time, and the burden of dealing with any differences between float and double should be counted as a developer cost of performing that optimization.

answered by Steve Jessop