$a = '35'; $b = '-34.99'; echo ($a + $b);
Results in 0.009999999999998
What is up with that? I wondered why my program kept reporting odd results.
Why doesn't PHP return the expected 0.01?
Difference in precision (accuracy): float and double differ in how many decimal digits they can hold accurately. A float is good for about 7 decimal digits, while a double is good for about 15.
float is a 32-bit IEEE 754 single precision number and has about 7 decimal digits of precision. double is a 64-bit IEEE 754 double precision floating point number: 1 bit for the sign, 11 bits for the exponent, and 52 explicitly stored bits for the significand (53 counting the implicit leading 1). double has about 15 decimal digits of precision.
The most commonly used double precision format stores the number with 53 bits of precision, which gives approximately 16 decimal digits. You may find that long double gives more precision, but there is no guarantee that it is any larger than double.
Precision is defined as the number of significant digits, and scale is the number of digits behind the decimal point. This means that a number like “1.23E-1000” would require a scale of 1002 but a precision of 3.
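To see the double precision limit from PHP (where every float is a 64-bit double), you can raise the precision setting and print more digits than PHP normally shows. This is just a quick sketch; the exact trailing digits may vary slightly between builds:

<?php
// PHP floats are IEEE 754 doubles: roughly 15-16 significant decimal digits.
ini_set('precision', '20');   // print more digits than the default (usually 14)

echo 1/3, "\n";     // something like 0.33333333333333331483 -- only ~16 digits are meaningful
echo 34.99, "\n";   // something like 34.990000000000001990 -- not exactly 34.99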
Because floating point arithmetic != real number arithmetic. An illustration of the difference due to imprecision: for some floats a and b, (a+b)-b != a. This applies to any language using floats.
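A quick PHP illustration of that inequality (just a sketch; 0.1 and 0.2 are arbitrary example values, and the printed digits may differ slightly by platform):

<?php
$a = 0.1;
$b = 0.2;

var_dump(($a + $b) - $b == $a);    // bool(false) -- the round-off from $a + $b doesn't cancel out
printf("%.17g\n", ($a + $b) - $b); // something like 0.10000000000000003
printf("%.17g\n", $a);             // something like 0.10000000000000001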
Since floating point numbers are binary numbers with finite precision, there is only a finite set of representable values, which leads to accuracy problems and surprises like this. Here's another interesting read: What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Back to your problem: basically there is no way to exactly represent 34.99 or 0.01 in binary (just as 1/3 = 0.3333... can't be written exactly in decimal), so approximations are used instead. To get around the problem, you can:
Use round($result, 2) on the result to round it to 2 decimal places.
Use integers. If it's currency, say US dollars, then store $35.00 as 3500 and $34.99 as 3499, then divide the result by 100 (a short sketch of both approaches follows below).
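Here is a minimal sketch of both workarounds, assuming US-dollar amounts (variable names are only for illustration):

<?php
$a = '35';
$b = '-34.99';

// Workaround 1: round the floating point result to 2 decimal places.
echo round($a + $b, 2), "\n";                 // 0.01

// Workaround 2: do the arithmetic in integer cents and only divide by 100 for display.
$centsA = 3500;                               // $35.00
$centsB = -3499;                              // -$34.99
printf("%.2f\n", ($centsA + $centsB) / 100);  // 0.01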
It's a pity that PHP doesn't have a decimal datatype like other languages do.
Floating point numbers, like all numbers, must be stored in memory as a string of 0's and 1's. It's all bits to the computer. How floating point differs from integer is in how we interpret the 0's and 1's when we want to look at them.
One bit is the "sign" (0 = positive, 1 = negative), 8 bits are the exponent (ranging from -128 to +127), and 23 bits form the fraction known as the "mantissa". So the binary representation (S1)(P8)(M23) has the value (-1)^S * M * 2^P
The "mantissa" takes on a special form. In normal scientific notation we display the "one's place" along with the fraction. For instance:
4.39 x 10^2 = 439
In binary the "one's place" is a single bit. Since we drop any leading 0's in scientific notation (they are not significant figures), the first bit is guaranteed to be a 1:
1.101 x 2^3 = 1101 = 13
Since we are guaranteed that the first bit will be a 1, we remove this bit when storing the number to save space. So the above number is stored as just 101 (for the mantissa). The leading 1 is assumed
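As a quick check of that arithmetic in PHP (nothing here depends on how PHP stores floats; it just spells out the place values from the example above):

<?php
// 1.101 in binary is 1 + 1/2 + 0/4 + 1/8 = 1.625
$significand = 1 + 1/2 + 0/4 + 1/8;

echo $significand * 2 ** 3, "\n";   // 13 -- i.e. 1.101 x 2^3 = 1101 in binary
echo bindec('1101'), "\n";          // 13 -- the same number; only the "101" would actually be stored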
As an example, let's take the binary string
00000010010110000000000000000000
Breaking it into its components:
Sign  Power     Mantissa
0     00000100  10110000000000000000000
+     +4        1.1011
+     +4        1 + .5 + .125 + .0625
+     +4        1.6875
Applying our simple formula:
(-1)^S * M * 2^P
(-1)^0 * (1.6875) * 2^(+4)
(1) * (1.6875) * (16)
27
In other words, 00000010010110000000000000000000 is 27 in this floating point format. (The real IEEE 754 standard stores the exponent with a bias of 127 rather than in two's complement, but the decoding idea is the same.)
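If you want to poke at a real 32-bit encoding from PHP, pack() and unpack() can expose the raw bits. This is a sketch: the 'G' pack code (big-endian 32-bit float) needs a reasonably recent PHP (7.1-era or later), and the exponent field you'll see is the IEEE 754 biased form (4 + 127 = 131), not the two's complement form used in the simplified example above:

<?php
// Dump the raw bits of a value stored as a 32-bit IEEE 754 float.
function float32_bits(float $x): string {
    $asInt = unpack('N', pack('G', $x))[1];   // reinterpret the 4 float bytes as a 32-bit integer
    return str_pad(decbin($asInt), 32, '0', STR_PAD_LEFT);
}

$bits = float32_bits(27.0);
echo $bits, "\n";                // 01000001110110000000000000000000
echo substr($bits, 0, 1), ' ',   // sign:     0
     substr($bits, 1, 8), ' ',   // exponent: 10000011 (131, i.e. 4 + the bias of 127)
     substr($bits, 9), "\n";     // mantissa: 10110000000000000000000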
For many numbers, however, there is no exact binary representation. Much like how 1/3 = 0.333... repeats forever, 1/100 is 0.00000010100011110101110000... with a repeating "10100011110101110000". A 32-bit float can't store the entire number, so it makes its best guess.
0.0000001010001111010111000010100011110101110000

Sign  Power      Mantissa
+     -7         1.01000111101011100001010
0     -00000111  01000111101011100001010
0     11111001   01000111101011100001010

01111100101000111101011100001010
(note that negative 7 is produced here using 2's complement; real IEEE 754 would store a biased exponent instead)
It should be immediately clear that 01111100101000111101011100001010 looks nothing like 0.01
More importantly, however, this contains a truncated version of a repeating fraction. The original binary expansion contained a repeating "10100011110101110000"; we've cut that down to the 23 mantissa bits 01000111101011100001010.
Translating this floating point number back into decimal via our formula, we get approximately 0.0099999998 (note that this is for a 32-bit float; a 64-bit double gives much more accuracy).
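You can reproduce that loss from PHP by round-tripping 0.01 through a 32-bit float with the same pack()/unpack() trick as above (a sketch; PHP's own floats stay 64-bit, which is why the error in your question is far smaller, but the mechanism is identical):

<?php
// Store 0.01 in 32 bits, then read it back into PHP's 64-bit float.
$asSingle = unpack('G', pack('G', 0.01))[1];

printf("%.10f\n", $asSingle);   // roughly 0.0099999998 -- the 32-bit best guess
printf("%.10f\n", 0.01);        // 0.0100000000 -- the 64-bit approximation is much closer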
If it helps to understand the problem better, let's look at decimal scientific notation when dealing with repeating decimals.
Let's assume that we have 10 "boxes" to store digits. Therefore if we wanted to store a number like 1/16 we would write:
+---+---+---+---+---+---+---+---+---+---+
| + | 6 | . | 2 | 5 | 0 | 0 | e | - | 2 |
+---+---+---+---+---+---+---+---+---+---+
Which is clearly just 6.25 e -2, where e is shorthand for *10^(exponent). We've allocated 4 boxes for the digits after the decimal point even though we only needed 2 (padding with zeroes), and we've allocated 2 boxes for signs (one for the sign of the number, one for the sign of the exponent).
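Incidentally, PHP's %e format prints numbers in essentially this boxed layout (a quick sketch; the exact exponent formatting is cosmetic and may differ from C's):

<?php
printf("%+.4e\n", 1/16);   // something like +6.2500e-2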
Using 10 boxes like this we can display numbers from -9.9999 e +9 to +9.9999 e +9, with exponents ranging from -9 to +9.
This works fine for anything with 4 or fewer decimal places, but what happens when we try to store a number like 2/3?
+---+---+---+---+---+---+---+---+---+---+
| + | 6 | . | 6 | 6 | 6 | 7 | e | - | 1 |
+---+---+---+---+---+---+---+---+---+---+
This new number 0.66667 does not exactly equal 2/3. In fact, it's off by 0.000003333.... If we were to try and write 0.66667 in base 3, we would get 0.2000000000012... instead of 0.2.
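The same experiment in PHP, keeping only 5 significant digits of 2/3 the way the 10-box machine does (a sketch; the exact trailing digits of the error may vary):

<?php
$stored = round(2/3, 5);            // 0.66667 -- the best our 5-digit machine can do
printf("%.10f\n", $stored);         // 0.6666700000
printf("%.10f\n", $stored - 2/3);   // roughly 0.0000033333 -- the error described above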
This problem may become more apparent if we take something with a larger repeating decimal, like 1/7. This has 6 repeating digits: 0.142857142857... Storing this in our decimal computer, we can only keep 5 of those digits:
+---+---+---+---+---+---+---+---+---+---+
| + | 1 | . | 4 | 2 | 8 | 6 | e | - | 1 |
+---+---+---+---+---+---+---+---+---+---+
This number, 0.14286, is off by 0.000002857... It's "close to correct", but it's not exactly correct, so if we tried to write this number in base 7 we would get some hideous number instead of 0.1. In fact, plugging it into Wolfram Alpha we get 0.10000022320335...
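To see where that hideous base-7 number comes from, here is a small sketch that expands a fraction digit by digit in an arbitrary base (toBaseFraction is just a helper written for this illustration, not a built-in):

<?php
// Expand the fractional part of $x in the given base, one digit at a time.
function toBaseFraction(float $x, int $base, int $digits): string {
    $out = '0.';
    for ($i = 0; $i < $digits; $i++) {
        $x *= $base;
        $digit = (int) floor($x);
        $out .= $digit;
        $x -= $digit;
    }
    return $out;
}

echo toBaseFraction(0.14286, 7, 14), "\n";  // roughly 0.10000022320335 -- the truncated value in base 7
echo toBaseFraction(1/7, 7, 14), "\n";      // 0.10000000000000 -- the true 1/7 really is just 0.1 in base 7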
These minor fractional differences should look familiar next to the 0.0099999998 above (as opposed to 0.01), and next to the 0.009999999999998 in your question.