 

CGFloat addition bug?

I was trying to add up some CGFloat values recursively in my program, and I noticed that in one particular scenario the total was incorrect. To make sure my program logic wasn't at fault, I reduced the scenario to the simple example below, which prints the same wrong value.

// Summing with CGFloat (a single-precision float on 32-bit targets)
CGFloat arr[3] = {34484000, 512085280, 143011440};
CGFloat sum = 0.0;
sum = arr[0] + arr[1] + arr[2];

NSLog(@"%f", sum);   // prints 689580736.000000

// The same sum with 32-bit integers
int arr1[3] = {34484000, 512085280, 143011440};
int sum1 = 0;
sum1 = arr1[0] + arr1[1] + arr1[2];

NSLog(@"%d", sum1);  // prints 689580720

The first NSLog prints 689580736.000000, while the correct result is 689580720. The second NSLog, however, prints the correct result. I am not sure whether this is a bug or whether I am doing something wrong.

Thanks, Murali

asked Sep 04 '11 by Murali Raghuram

People also ask

What does CGFloat mean?

A CGFloat is a specialized form of Float that holds either 32 bits or 64 bits of data depending on the platform. The CG tells you it's part of Core Graphics, and it's found throughout UIKit, Core Graphics, Sprite Kit and many other iOS libraries.

Is CGFloat a double?

As @weichsel stated, CGFloat is just a typedef for either float or double. You can see for yourself by Command-double-clicking on "CGFloat" in Xcode — it will jump to the CGBase.h header where it is defined.
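For reference, the relevant part of CGBase.h boils down to roughly this (paraphrased; the real header goes through an intermediate CGFLOAT_TYPE macro and the exact guards vary by SDK version):

#if defined(__LP64__) && __LP64__
typedef double CGFloat;   // 64-bit platforms: CGFloat is a double
#define CGFLOAT_IS_DOUBLE 1
#else
typedef float CGFloat;    // 32-bit platforms: CGFloat is a float
#define CGFLOAT_IS_DOUBLE 0
#endif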

What is CGFloat in Objective C?

In Objective-C, CGFloat is the type generally used for floating-point operations; it is defined as float on 32-bit platforms and as double on 64-bit platforms.

What is the difference between the float double and CGFloat data types?

It's a question of how many bits are used to store the data: Float is always 32-bit, Double is always 64-bit, and CGFloat is either 32-bit or 64-bit depending on the device it runs on, though on current devices it is effectively always 64-bit.
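If you want to verify the sizes on a particular build, a minimal check looks like this (the byte counts in the comments assume Apple's usual 32-bit/64-bit targets):

#import <CoreGraphics/CGBase.h>
#include <stdio.h>

int main(void) {
    // Print the storage size of each type for the current target.
    printf("float:   %zu bytes\n", sizeof(float));    // always 4
    printf("double:  %zu bytes\n", sizeof(double));   // always 8
    printf("CGFloat: %zu bytes\n", sizeof(CGFloat));  // 4 on 32-bit targets, 8 on 64-bit targets
    return 0;
}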


1 Answer

CGFloat is a single-precision float on 32-bit targets such as iOS - it has only a 23-bit mantissa, i.e. around 6-7 significant decimal digits. Use a double-precision type if you need greater accuracy.
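A minimal sketch of the fix, using double with the same three values as in the question (expected output in the comment):

double arr[3] = {34484000, 512085280, 143011440};
double sum = arr[0] + arr[1] + arr[2];

NSLog(@"%f", sum);   // 689580720.000000 - double's 52-bit mantissa easily holds 9 digits

The wrong answer is not random, either: near 689,580,720 consecutive single-precision floats are spaced 2^6 = 64 apart, so the exact sum is rounded to the nearest representable value, which is 689,580,736 - exactly what the first NSLog printed.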

You should probably read David Goldberg's What Every Computer Scientist Should Know About Floating-Point Arithmetic before proceeding much further with learning to program.

answered Oct 14 '22 by Paul R