As a worshipper of John Carmack, I was reading id Software's publicly available Coding Conventions document (at ftp://ftp.idsoftware.com/idstuff/doom3/source/CodeStyleConventions.doc if you're interested), and came across a convention I didn't entirely understand:
Use precision specification for floating point values unless there is an explicit need for a double.
float f = 0.5f;
Instead of
float f = 0.5;
And
float f = 1.0f;
Instead of
float f = 1.f;
How are these different?
I can understand the difference between the two in the first example (the latter doing a double-to-float conversion under the hood), although I'd suspect any non-stupid compiler would fold that conversion at compile time and emit the same machine code, so there would be no runtime difference.
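To make that concrete, here is a minimal sketch of what I mean by the first example (the variable names a and b are mine, not from the conventions document):

#include <cassert>

int main() {
    float a = 0.5;   // 0.5 is a double literal, converted to float when stored
    float b = 0.5f;  // 0.5f is a float literal, no conversion involved

    // 0.5 is exactly representable in binary floating point, so the
    // compile-time conversion loses nothing and both values compare equal.
    assert(a == b);
    return 0;
}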
But is there any case where adding trailing 0's to your floating point value declaration changes things?
The only "thing" that adding a zero would change is readability. The resultant code would be exactly the same, because compilers do not care, but the constant with a zero in place would be easier to read by the human readers.