Getting weird value in Double

Hello, I made a "Clicker" as a first project while learning Swift. I have an automated timer that is supposed to subtract some numbers from other numbers, but sometimes I get values like 0.600000000000001 and I have no idea why.

Here is my "Attack" function that removes 0.2 from the health of a zombie.

let fGruppenAttackTimer = NSTimer.scheduledTimerWithTimeInterval(1, target: self, selector: Selector("fGruppenAttackTime"), userInfo: nil, repeats: true)

func fGruppenAttackTime() {
    zHealth -= 0.2 
    if zHealth <= 0 {
        zHealth = zSize
        pPengar += pPengarut
    }

    ...
}

And here is my attackZ button that is supposed to remove 1 from the zombie's health:

@IBAction func attackZ(sender: UIButton) {
    zHealth -= Double(pAttack)
    fHunger -= 0.05
    fGruppenHunger.progress = Float(fHunger / 100)
    Actionlbl.text = ""
    if zHealth <= 0 {
        zHealth = zSize
        pPengar += pPengarut
    }
}

Lastly, here are the variables' initial values:

var zHealth = 10.0
var zSize = 10.0
var pAttack = 1
var pPengar = 0
var pPengarut = 1

When the timer is running and I click the button, I sometimes get weird values like 0.600000000000001, and if I change the 0.2 in the function to 0.25, I sometimes get 0.0999999999999996. I wonder why this happens and what to do about it.
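The drift is easy to reproduce outside the app. This minimal snippet (an illustration only, not part of my project) subtracts 0.2 twice, the same way the timer does:

```swift
// 0.2 has no exact binary representation, so each subtraction
// introduces a tiny rounding error that can show up when the
// result is printed.
var health = 1.0
health -= 0.2
health -= 0.2
print(health)  // may print something like 0.6000000000000001 rather than 0.6
```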

LillKakaN asked Dec 15 '22


2 Answers

In trojanfoe's answer, he shares a link that describes the source of the problem: the rounding of floating-point numbers.

In terms of what to do, there are a number of approaches:

  1. You can shift to integer types. For example, if your existing values can all be represented with a maximum of two decimal places, multiply those by 100 and then use Int types everywhere, excising the Double and Float representations from your code.

  2. You can simply deal with the very small variations that the Double type introduces. For example:

    • If displaying the results in the UI, use NumberFormatter to convert the Double value to a String using a specified number of decimal places.

      let formatter = NumberFormatter()
      formatter.maximumFractionDigits = 2
      formatter.minimumFractionDigits = 0  // or you might use `2` here, too
      formatter.numberStyle = .decimal
      
      print(formatter.string(for: value)!)
      

      By the way, NumberFormatter enjoys another benefit, too, namely that it honors the user's localization settings. For example, if the user lives in Germany, where the decimal separator is a , rather than a ., NumberFormatter will use the user's native number formatting.

    • When testing whether a number is equal to some value, rather than just using the == operator, look at the difference between the two values and see whether it falls within some permissible rounding threshold.

  3. You can use Decimal/NSDecimalNumber, which doesn't suffer from rounding issues when dealing with decimals:

    var value = Decimal(string: "1.0")!
    value -= Decimal(string: "0.9")!
    value -= Decimal(string: "0.1")!
    

    Or:

    var value = Decimal(1)
    value -= Decimal(sign: .plus, exponent: -1, significand: 9)
    value -= Decimal(sign: .plus, exponent: -1, significand: 1)
    

    Or:

    var value = Decimal(1)
    value -= Decimal(9) / Decimal(10)
    value -= Decimal(1) / Decimal(10)
    

    Note, I explicitly avoid using any Double values such as Decimal(0.1), because creating a Decimal from a fractional Double only captures whatever imprecision the Double entails, whereas the three examples above avoid that entirely.
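The threshold comparison mentioned in approach 2 can be sketched like this (the helper name and the tolerance value are my own illustrative choices, not a standard API):

```swift
// Compare two Doubles within an absolute tolerance instead of
// using == directly. The name `nearlyEqual` and the 1e-9 default
// are illustrative, not from the standard library.
func nearlyEqual(_ a: Double, _ b: Double, tolerance: Double = 1e-9) -> Bool {
    return abs(a - b) <= tolerance
}

let total = 0.1 + 0.2
print(total == 0.3)              // false: binary rounding error
print(nearlyEqual(total, 0.3))   // true: within tolerance
```

Pick a tolerance appropriate to your data; for two-decimal game values like these, something such as 0.001 would also be reasonable.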

Rob answered Dec 27 '22


It's because of floating point rounding errors.

For further reading, see What Every Computer Scientist Should Know About Floating-Point Arithmetic.

Squeezing infinitely many real numbers into a finite number of bits requires an approximate representation. Although there are infinitely many integers, in most programs the result of integer computations can be stored in 32 bits. In contrast, given any fixed number of bits, most calculations with real numbers will produce quantities that cannot be exactly represented using that many bits. Therefore the result of a floating-point calculation must often be rounded in order to fit back into its finite representation. This rounding error is the characteristic feature of floating-point computation.
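You can see this rounding directly by printing a literal with more digits than the default (an illustration I've added, not part of the quoted text):

```swift
import Foundation

// 0.2 cannot be stored exactly in binary floating point; printing
// it with 20 decimal places reveals the nearest representable value,
// which is slightly above 0.2.
print(String(format: "%.20f", 0.2))
```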

trojanfoe answered Dec 27 '22