 

What is the point of using 'f' when assigning a value to a CGFloat?

I see this all the time:

CGFloat someCGFloat = 1.2f;

Why is the 'f' used? If CGFloat is defined as float, the value will be converted to a float, and if CGFloat is defined as double, it will be converted to a double.

Is it just to make sure a conversion from double to float doesn't occur? What's the point of doing that? Also, wouldn't the compiler take care of this?

EDIT: Hmmm…which answer to accept…both are very good!

asked Mar 22 '13 by fumoboy007


2 Answers

A floating-point literal like 1.2 is a double by default, so when the right-hand value is 1.2 there is an implicit conversion from double to float (for a constant this happens at compile time, not at runtime). In this case it doesn't matter whether you write 1.2f. Programmers often add the suffix out of habit, but there are cases where it really matters.

For example:

float var = 1.0e-45;
NSLog(@"%d", var == 1.0e-45);

This prints 0, because 1.0e-45 cannot be represented exactly in single precision: the value stored in var is not the same as the double literal it is compared against, so the comparison fails. Writing var == 1.0e-45f changes the result.
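For instance, a small follow-up sketch of the same example: suffixing the literal keeps both sides of the comparison in single precision.

float var = 1.0e-45;
NSLog(@"%d", var == 1.0e-45f);  // prints 1: the literal is now rounded to the same single-precision value as var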

Using the right literal suffix matters mostly when writing expressions: since the left-hand value is a float, you might expect the whole expression to be evaluated as a float, but that's not what happens.
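For example (the variable names here are only illustrative), the suffix decides the precision in which the intermediate arithmetic is carried out:

float scale = 0.1f;
float a = scale * 1.2;   // scale is promoted to double, the multiply is done in double, then the result is converted back to float
float b = scale * 1.2f;  // the whole expression stays in single precision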

A more striking case involves the l suffix: a literal without it can get shifted so far that the result becomes zero, which tends to surprise people:

long var = 1 << 32;  // assuming an int takes 4 bytes and a long 8 bytes

Here the literal 1 is an int, so the shift is performed in 32 bits (shifting a 32-bit int by 32 is actually undefined behavior) and the result typically comes out as zero; writing 1l << 32 completely changes the result.
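A minimal sketch of the fix (assuming a 64-bit long, as in the comment above): the suffix makes the shift happen in the wider type.

long ok = 1L << 32;  // the literal is a long, so the shift is performed in 64 bits
NSLog(@"%ld", ok);   // prints 4294967296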

answered by Ramy Al Zuhouri


In your snippet, which is just an assignment, you don't need the 'f' suffix, and in fact you shouldn't use it. If CGFloat is single precision (as on 32-bit iOS), your value will be stored in single precision with or without the 'f'; if CGFloat is double precision (as on 64-bit OS X), the 'f' makes you unnecessarily create a single-precision value that is then stored in double precision.
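For reference, CGFloat's width follows the platform's pointer size; the declaration in CGBase.h looks roughly like this (paraphrased, not the verbatim header):

#if defined(__LP64__) && __LP64__
typedef double CGFloat;      // 64-bit platforms: CGFloat is double precision
#define CGFLOAT_IS_DOUBLE 1
#else
typedef float CGFloat;       // 32-bit platforms: CGFloat is single precision
#define CGFLOAT_IS_DOUBLE 0
#endif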

On the other hand, if you're doing arithmetic you should be careful to use 'f' (or not) as appropriate. If you're working in single precision and include a literal like 1.2 without the 'f', the compiler will promote the other operands to double precision. And if you're working in double precision and you include the 'f', then (like the assignment case above) you'll create a single-precision value only to have it immediately converted to double.
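A minimal sketch of that arithmetic case (the variable names are only illustrative); casting the literal to CGFloat is a common way to avoid choosing between 1.2 and 1.2f by hand:

CGFloat width = 320.0;
CGFloat a = width * 1.2;            // when CGFloat is float, width is promoted and the multiply happens in double
CGFloat b = width * (CGFloat)1.2;   // the cast keeps the arithmetic in CGFloat's own precision on both platforms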

answered by Aaron Golden