What is the difference between "1.0f" and "1.f"?

As a worshipper of John Carmack, I was reading Id Tech's publicly-available Coding Conventions document (at ftp://ftp.idsoftware.com/idstuff/doom3/source/CodeStyleConventions.doc if you're interested), and came across a convention I didn't entirely understand:

Use precision specification for floating point values unless there is an explicit need for a double.

float f = 0.5f;

Instead of

float f = 0.5;

And

float f = 1.0f;

Instead of

float f = 1.f;

How are these different?

I can understand the difference between the two in the first example (the latter does a double-to-float conversion under the hood), although I'd suspect any not-stupid compiler would catch it and produce the same machine code, for no runtime difference.
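To pin down what the literals actually are, here is a minimal compile-time sketch (C++11); the static_asserts are only an illustration, not something taken from the conventions document:

#include <type_traits>

// A floating-point literal with no suffix is a double; the f suffix makes it a float.
static_assert(std::is_same<decltype(0.5),  double>::value, "0.5 is a double literal");
static_assert(std::is_same<decltype(0.5f), float>::value,  "0.5f is a float literal");
static_assert(std::is_same<decltype(1.f),  float>::value,  "1.f is a float literal");
static_assert(std::is_same<decltype(1.0f), float>::value,  "1.0f is the same float literal");

int main() {}  // nothing to run; the checks happen at compile time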

But is there any case where adding trailing zeros to your floating-point value declaration changes things?

Ben Walker, asked Feb 24 '13


1 Answer

But is there any case where adding trailing zeros to your floating-point value declaration changes things?

The only "thing" that adding a zero would change is readability. The resultant code would be exactly the same, because compilers do not care, but the constant with a zero in place would be easier to read by the human readers.

Sergey Kalinichenko, answered Oct 15 '22