 

Why does multiplying and dividing by N "fix" floating point representation?

I am working in JavaScript, but the problem is generic. Take this rounding error:

>> 0.1 * 0.2
0.020000000000000004

This StackOverflow answer provides a nice explanation. Essentially, certain decimal numbers cannot be represented exactly in binary. This is intuitive, since 1/3 has a similar problem in base-10. Now a workaround is this:

>> (0.1 * (1000*0.2)) / 1000
0.02

My question is: how does this work?
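To see that neither operand is stored exactly in the first place, toFixed can expose more digits than the default display shows (a quick check, run in Node or a browser console):

```javascript
// Doubles store the nearest representable binary fraction, not the
// decimal literal itself; asking for 20 decimal places reveals the
// stored values behind 0.1 and 0.2.
console.log((0.1).toFixed(20)); // "0.10000000000000000555"
console.log((0.2).toFixed(20)); // "0.20000000000000001110"
```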

asked Jun 11 '14 by jds

2 Answers

It doesn't work. What you see there is not exactly 0.02, but a number that is close enough (to 15 significant decimal digits) to look like it.

It just happens that multiplying an operand by 1000, then dividing the result by 1000, results in rounding errors that yield an apparently "correct" result.
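A sketch of those intermediate roundings (the variable names here are just for illustration):

```javascript
// Each intermediate step happens to round to a "clean" double for these
// particular operands -- that is the entire trick.
const scaled = 1000 * 0.2;     // exactly 200: 0.2's tiny error washes
                               // out when the product is rounded
const product = 0.1 * scaled;  // exactly 20, for the same reason
const result = product / 1000; // the double nearest 0.02 -- the same
                               // value the literal 0.02 parses to
console.log(scaled, product, result); // 200 20 0.02
console.log(result === 0.02);         // true (equal as doubles, still
                                      // not the exact decimal 0.02)
```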

You can see the effect for yourself in your browser's Console. Convert numbers to binary using Number.toString(2) and you'll see the difference:

[Screenshot: console showing 0.1, 0.2, 0.1*0.2 and (0.1*(0.2*1000))/1000, each with its binary representation]

Correlation does not imply causation.

answered Nov 15 '22 by Niet the Dark Absol


It doesn't. Try 0.684 and 0.03 instead and this trick actually makes it worse. Or 0.22 and 0.99. Or a huge number of other things.
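A quick check of those counterexamples (the behaviour follows from IEEE-754 doubles, so any conformant JS engine gives the same results):

```javascript
// Here the direct product already rounds to the double nearest 0.02052,
// and the scale-up/scale-down detour introduces an extra rounding error.
const direct = 0.684 * 0.03;
const scaled = (0.684 * (1000 * 0.03)) / 1000;

console.log(direct === 0.02052); // true
console.log(scaled === 0.02052); // false -- the "fix" made it worse

// 0.22 * 0.99 misbehaves the same way:
console.log(0.22 * 0.99 === 0.2178);                   // true
console.log((0.22 * (1000 * 0.99)) / 1000 === 0.2178); // false
```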

answered Nov 15 '22 by tmyklebu