I've seen code where people initialize float variables like this:
float num = 0.0f;
Is there a significant difference between this and just doing the following?
float num = 0;
Thanks.. :)
'f' indicates that you want a float: 0 is an int, 0.0 is a double, and 0.0f is a float. (Note that in C and C++ the f suffix only goes on floating constants, so you need a decimal point or an exponent: 0.f and 0.0f are valid, but a bare 0f is not.)
f means float, as in the C programming language.
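If you want to check the types of these literals yourself, here's a minimal C11 sketch using _Generic (not part of the original answers):

#include <stdio.h>

/* maps an expression to a string naming its static type */
#define TYPE_NAME(x) _Generic((x), int: "int", float: "float", double: "double", default: "other")

int main(void) {
    printf("0    -> %s\n", TYPE_NAME(0));     /* prints: int    */
    printf("0.0  -> %s\n", TYPE_NAME(0.0));   /* prints: double */
    printf("0.0f -> %s\n", TYPE_NAME(0.0f));  /* prints: float  */
    return 0;
}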
double has roughly twice the precision of float. float is a 32-bit IEEE 754 single-precision floating-point number – 1 bit for the sign, 8 bits for the exponent, and 23 bits for the significand (plus an implicit leading bit) – which gives about 7 decimal digits of precision, versus about 15–16 for double.
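You can see the difference by printing the same value at both precisions (a small illustrative sketch, not from the original answer):

#include <stdio.h>

int main(void) {
    float  f = 1.0f / 3.0f;   /* single precision: ~7 good decimal digits     */
    double d = 1.0 / 3.0;     /* double precision: ~15-16 good decimal digits */

    printf("float : %.20f\n", f);   /* digits beyond roughly the 7th are noise  */
    printf("double: %.20f\n", d);   /* accurate to roughly the 16th digit       */
    return 0;
}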
float x = 0 has an implicit typecast from int to float.
float x = 0.0f does not have such a typecast.
float x = 0.0 has an implicit typecast from double to float.
Depending on the compiler, an implicit conversion can require the compiler to generate extra code (although for constant initializers like these, the conversion is typically done at compile time).
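Spelled out with comments (the variable names here are just for illustration):

float a = 0;      /* int 0 implicitly converted to float      */
float b = 0.0;    /* double 0.0 implicitly converted to float */
float c = 0.0f;   /* already a float, no conversion needed    */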
Probably the reason is that they once wrote something like:
float f = 1 / i; // i an integer
Having debugged that, they swore always to decorate literals sufficiently to get the right type:
float f = 1.0f / i;
In this case, the .0 is to ensure that the division is floating-point division, not integer division. The f is because there's no need for the operation to be done in double -- I expect there's more chance of an implementation where 1.0/i would be significantly slower for no benefit (software float emulation, indifferent optimization) than one where 1.0f is significantly slower for no benefit (if double is faster than float, that's because you have FP hardware, so conversion between the two will be very fast and won't introduce a significant slowdown).
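To make that pitfall concrete, here's a minimal sketch (the variable names and values are mine, not the original poster's):

#include <stdio.h>

int main(void) {
    int i = 4;
    float wrong = 1 / i;      /* integer division: 1 / 4 == 0, then 0 becomes 0.0f */
    float right = 1.0f / i;   /* i is converted to float, so the result is 0.25f   */

    printf("wrong = %f\n", wrong);   /* prints 0.000000 */
    printf("right = %f\n", right);   /* prints 0.250000 */
    return 0;
}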
Once you've got into the habit of decorating literals, you might well write:
float f = 0.0f;
even though it has exactly the same effect as float f = 0.0; or float f = 0;.
Of course the author might not have gone through this revelation personally, they might just have inherited the style of someone else who did.
I'd just write 0.
R.. points out in a comment on another answer that writing 0 also has the benefit that when you change the type of f in future, you don't have to update the literal to match. And if the assignment is separate from the definition, then changing:
float f = something;
// some time later
f = 0.1f;
to:
double f = something;
// some time later
f = 0.1f;
is probably a bug. Better to use 0.1 and let the compiler truncate to float if necessary. You could probably argue that using float at all is an optimization, for space if not for time, and the burden of dealing with any differences between float and double should be counted as a developer cost of performing that optimization.
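For what it's worth, here's a small sketch of why leaving the f behind after such a type change is a subtle bug -- the literal keeps only float precision (the printed values are approximate):

#include <stdio.h>

int main(void) {
    double d1 = 0.1f;   /* float literal widened to double: only float accuracy */
    double d2 = 0.1;    /* double literal: full double accuracy                 */

    printf("%.17f\n", d1);   /* roughly 0.10000000149011612 */
    printf("%.17f\n", d2);   /* roughly 0.10000000000000001 */
    return 0;
}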