If I have the following expressions:
byte A = 69;
int B = 123;
long C = 3210;
float D = 4.9f;
double E = 11.11;
double X = (B * 100) + 338.1 - (E / B) / C;
double X1 = (B * 100) + (A * D) - (E / B) / C;
// JAVA - lost precision
System.out.println(X); // 12638.099971861307
System.out.println(X1); // 12638.099581236307
// C# - almost the same
Console.WriteLine(X); // 12638.0999718613
Console.WriteLine(X1); // 12638.0999784417
I noticed that Java's X1 loses precision relative to X (where 338.1 is an implicit double), while C#'s X1 almost doesn't. I don't understand why, because 338.1 should be equal in float and in double: there is only one digit after the decimal point.
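First, the premise isn't quite right: 338.1 is not the same value in float and in double. The fraction .1 has no finite binary representation, so each type rounds it to its own precision; having only one decimal digit says nothing about exactness in binary. A quick Java check (widening the float to double to expose the value it actually stores):

System.out.println((double) 338.1f); // prints 338.10000610..., not 338.1
System.out.println(338.1);           // prints 338.1
System.out.println(338.1f == 338.1); // false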
In Java, (B * 100) + (A * D)
will be a float: the byte and int operands are promoted to float, so the whole sub-expression is evaluated in float arithmetic and rounded to the float closest to 12638.1. However, 12638 requires 14 bits to express in binary, including the leading 1, which leaves only 10 of a float's 24 significand bits for the fractional part. Therefore you're going to get the closest multiple of 1/1024 to 0.1, which is 102/1024 = 0.099609375, so the float carries a rounding error of 0.000390625.
That seems to be the difference between X and X1 that you're getting in your Java program.
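You can verify this by pulling the float sub-expression out and widening it. A minimal sketch reusing the question's variables (the exact printed digits may vary slightly by platform):

byte A = 69;
int B = 123;
float D = 4.9f;
// byte * float and int + float are both promoted to float, so this
// whole sub-expression is rounded to float precision before it is
// ever combined with a double:
float sum = (B * 100) + (A * D);
System.out.println((double) sum);  // 12638.099609375 = 12638 + 102/1024
System.out.println(Math.ulp(sum)); // 9.765625E-4 = 1/1024
System.out.println(12638.1 - sum); // ~3.90625E-4, the rounding error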
I'm afraid I'm not a C# expert, so I can't tell you why C# is different.
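One hedged observation, though: if you force the float sub-expression into double precision in Java, you reproduce C#'s X1 almost digit for digit, which suggests the CLR evaluated the float intermediate at double (or higher) precision, something the CLI specification (ECMA-335) permits for floating-point temporaries. A sketch, again reusing the question's variables:

byte A = 69;
int B = 123;
long C = 3210;
float D = 4.9f;
double E = 11.11;
// Widening D before the multiplication keeps everything in double:
double sumWide = (B * 100) + (A * (double) D);
System.out.println(sumWide - (E / B) / C); // ~12638.09997844..., cf. C#'s 12638.0999784417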