Recently I came across a bug/feature in several languages. I have only a very basic understanding of what causes it (and I'd like a detailed explanation), but when I think of all the bugs I must have made over the years, the question is: how can I tell "hey, this might cause a ridiculous bug, I'd better use arbitrary-precision functions"? Which other languages have this bug (and which don't, and why)? Also, why does 0.1+0.7 do this while, e.g., 0.1+0.3 doesn't? Are there any other well-known examples?
PHP:
//the first one actually doesn't make any sense to me:
//why 7 after the typecast if it's represented internally as 8?
debug_zval_dump((0.1+0.7)*10); //double(8) refcount(1)
debug_zval_dump((int)((0.1+0.7)*10)); //long(7) refcount(1)
debug_zval_dump((float)((0.1+0.7)*10)); //double(8) refcount(1)
Python:
>>> ((0.1+0.7)*10)
7.9999999999999991
>>> int((0.1+0.7)*10)
7
JavaScript:
alert((0.1+0.7)*10); //7.999999999999999
alert(parseInt((0.7+0.1)*10)); //7
Ruby:
>> ((0.1+0.7)*10).to_i
=> 7
>> ((0.1+0.7)*10)
=> 7.999999999999999
All of these languages use the IEEE 754 standard for their floating-point math, representing numbers as 64-bit binary doubles. This causes precision errors in decimal calculations because, in short, computers work in base 2 while decimal is base 10. It also explains the PHP output above: the value is really 7.999999999999999..., which PHP's default output precision rounds up to 8 for display, while the (int) cast truncates toward zero and so yields 7.
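You can see the exact values involved with Python's decimal module, which can display the full value actually stored in a binary double (a minimal sketch; the comments show the output I'd expect from CPython):

from decimal import Decimal

# Decimal(float) reveals the exact value the 64-bit double actually stores.
print(Decimal(0.1))        # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.7))        # 0.6999999999999999555910790149937383830547332763671875
print(Decimal(0.1 + 0.7))  # 0.79999999999999993338661852249060757458209991455078125

# The error is baked in before the multiplication, so *10 cannot repair it:
print((0.1 + 0.7) * 10)    # 7.999999999999999

With 0.1 + 0.3 the rounding happens to work out: the sum rounds to exactly the same double as the literal 0.4, which is why that example looks fine while 0.1 + 0.7 does not.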
Some values cannot be represented exactly in a binary floating-point type. For instance, storing 0.1 in a float variable gives us only an approximation of that value, just as the value 1/3 cannot be represented exactly in a decimal type.
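You can also ask Python for the exact fraction a float stores, using the standard fractions module (a small sketch):

from fractions import Fraction

# The literal 0.1 is stored as the nearest fraction whose denominator
# is a power of two -- not as 1/10:
print(Fraction(0.1))   # 3602879701896397/36028797018963968  (denominator is 2**55)

# In binary, 1/10 is the repeating fraction 0.0001100110011..., which has
# to be cut off somewhere, just as 1/3 = 0.333... does in decimal.
print(Fraction(1, 10) == Fraction(0.1))   # False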
Decimal values generally do not have an exact binary representation. This is a side effect of how the CPU represents floating-point data. For this reason, you may experience some loss of precision, and some floating-point operations may produce unexpected results.
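In practice this suggests two rules of thumb, sketched here in Python: round before converting to an integer (since int() and C-style casts truncate toward zero), and compare floats with a tolerance rather than with ==:

import math

x = (0.1 + 0.7) * 10   # really 7.999999999999999...

print(int(x))          # 7  -- truncation turns a tiny error into a whole unit
print(int(round(x)))   # 8  -- round to the nearest integer first

print(x == 8.0)              # False
print(math.isclose(x, 8.0))  # True -- tolerance-based comparison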
A floating-point number is a positive or negative number with a decimal point. For example, 5.5, 0.25, and -103.342 are all floating-point numbers, while 91 and 0 are not. Floating-point numbers get their name from the way the decimal point can "float" to any position necessary.
What Every Computer Scientist Should Know About Floating-Point Arithmetic
It's not a language issue; it's a general issue with floating-point arithmetic.
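If you need exact decimal results (money is the classic case), use an arbitrary-precision decimal type, as the question suggests. A minimal sketch with Python's decimal module (PHP offers the analogous BCMath extension):

from decimal import Decimal

# Build Decimals from strings so the base-10 values are stored exactly:
result = (Decimal('0.1') + Decimal('0.7')) * 10
print(result)       # 8.0
print(int(result))  # 8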