If I have this little function:
<script type="text/javascript">
function printFloat() {
    var myFloatNumber1 = document.getElementById('floatNumber1');
    var myFloatNumber2 = document.getElementById('floatNumber2');
    alert(parseFloat(myFloatNumber1.value) + parseFloat(myFloatNumber2.value));
}
</script>
<input type="text" id="floatNumber1">
<input type="text" id="floatNumber2">
<input type="button" value="Add" onclick="printFloat()">
In field 1 I enter 221.58, and in field 2 I enter 2497.74.
I expect the sum of the two numbers in the input fields to be a number with two decimal places: 2719.32. But the result is an incorrect-looking number: 2719.3199999999997.
Rounding would do the job, but I just don't get why the code does that with these particular numbers. With other number combinations, the sum is correct.
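The behaviour reproduces in a plain JavaScript console, without any DOM involved:

parseFloat("221.58") + parseFloat("2497.74");   // 2719.3199999999997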
To display the sum of two decimal numbers in JavaScript with a fixed precision, use the toFixed() method to round the result to a string with the desired number of decimal places, then convert it back to a number if you need to calculate with it further.
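A minimal sketch of that approach, reusing the element IDs from the question's markup:

function printFloat() {
    var a = parseFloat(document.getElementById('floatNumber1').value);
    var b = parseFloat(document.getElementById('floatNumber2').value);
    var sum = a + b;                // e.g. 2719.3199999999997
    var rounded = sum.toFixed(2);   // "2719.32" -- toFixed() returns a string
    alert(parseFloat(rounded));     // 2719.32, back to a number
}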
The parseFloat() function accepts a string and converts it into a floating-point number. If the string does not start with something that can be parsed as a number (ignoring leading whitespace), it returns NaN, i.e. Not a Number.
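For example:

parseFloat("221.58");   // 221.58
parseFloat("3.5kg");    // 3.5 -- parsing stops at the first character that cannot belong to a number
parseFloat("abc");      // NaN -- the string does not start with a number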
parseInt() parses a string and returns an integer (any fractional part is discarded), and parseFloat() parses a string and returns a floating-point number (the decimal part is kept). Input read from a user arrives as a string, so you can use one of these parse functions to convert it into a number you can perform calculations on.
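A quick comparison of the two on the question's input:

var input = "2497.74";    // user input arrives as a string
parseInt(input, 10);      // 2497 -- fractional part discarded (radix 10 made explicit)
parseFloat(input);        // 2497.74
input + 1;                // "2497.741" -- without parsing, + concatenates strings
parseFloat(input) + 1;    // 2498.74 -- numeric addition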
From The Floating-Point-Guide:
Why don’t my numbers, like 0.1 + 0.2 add up to a nice round 0.3, and instead I get a weird result like 0.30000000000000004?
Because internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.
When the code is compiled or interpreted, your “0.1” is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
In your case, the rounding errors happen when the values you entered are converted by parseFloat().
Why do other calculations like 0.1 + 0.4 work correctly?
In that case, the result (0.5) can be represented exactly as a floating-point number, and it’s possible for rounding errors in the input numbers to cancel each other out. But that can’t necessarily be relied upon (e.g. when those two numbers were stored in differently sized floating-point representations first, the rounding errors might not offset each other).
In other cases like 0.1 + 0.3, the result actually isn’t really 0.4, but close enough that 0.4 is the shortest number that is closer to the result than to any other floating-point number. Many languages then display that number instead of converting the actual result back to the closest decimal fraction.
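These cases can be checked directly in a JavaScript console:

0.1 + 0.2;   // 0.30000000000000004 -- the individual rounding errors add up
0.1 + 0.4;   // 0.5 -- 0.5 is exactly representable, and here the errors cancel
0.1 + 0.3;   // 0.4 -- the stored result is only close to 0.4; 0.4 is just its shortest display form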