
When is it better to use NSDecimal or NSDecimalNumber instead of a double?

For simple uses, such as tracking weight values like 65.1kg, is there any benefit of going with NSDecimal/NSDecimalNumber over double?

My understanding is that double (or even float) provides more than enough precision in such cases. Please correct me if I'm wrong.

asked Dec 19 '22 by Gaurav Sharma

1 Answer

First, read Josh Caswell's link. It is especially critical when working with money. In your case it may or may not matter, depending on your goal. If you put in 65.1 and you want to get exactly 65.1 back out, then you definitely need a format that rounds properly to decimal values, like NSDecimalNumber. If, when you put in 65.1, you want "a value that is within a small error of 65.1," then float or double are fine (depending on how much error you are willing to accept).
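Here is a minimal Swift sketch of that distinction (the printed digits are the typical output; the exact tail of the Double can vary). Note that Decimal, the Swift counterpart of NSDecimalNumber, is built from a string here on purpose: a float literal like Decimal(65.1) would pass through Double first and inherit its rounding.

import Foundation

let weight: Double = 65.1
// Double stores the nearest base-2 fraction, not 65.1 itself.
print(String(format: "%.17f", weight))   // ~65.09999999999999432

// Decimal stores the base-10 digits exactly when built from a string.
let exactWeight = Decimal(string: "65.1")!
print(exactWeight)                        // 65.1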

65.1 is a great example, because it demonstrates the problem. Here it is in Swift, because it's so easy to demonstrate there, but ObjC behaves the same:

 1> 65.1
$R0: Double = 65.099999999999994
  2>

1/10 happens to be a repeating fraction in binary, just like 1/3 is a repeating fraction in decimal. So 65.1 encoded as a double is "close to" 65.1, but not exact. If you need an exact representation of a decimal-encoded number (i.e. what most humans expect), use NSDecimalNumber. This isn't to say that NSDecimalNumber is more accurate than double. It just imposes different rounding errors than double. Which rounding errors you prefer depends on your use case.
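A small sketch of those competing rounding errors in Swift (the outputs shown are the typical ones; Decimal carries roughly 38 significant decimal digits):

import Foundation

// Base-2 rounding: ten additions of 0.1 drift away from 1.0.
let doubleSum = (0..<10).reduce(0.0) { sum, _ in sum + 0.1 }
print(doubleSum)                          // 0.9999999999999999

// Base-10 rounding: the same sum is exact in Decimal...
let tenth = Decimal(string: "0.1")!
let decimalSum = (0..<10).reduce(Decimal(0)) { sum, _ in sum + tenth }
print(decimalSum)                         // 1

// ...but 1/3 cannot be stored exactly in either representation.
print(Decimal(1) / Decimal(3))            // 0.33333333333333333333333333333333333333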

answered Dec 24 '22 by Rob Napier