I am attempting to make an adaptive 'about equal' method (written in C#, but the question is general) accepting two doubles and returning a boolean indicating whether they are 'about equal'. By adaptive, I mean that:
1.234 and 1.235 ==> TRUE
BUT
1.234567 and 1.234599 ==> FALSE
That is, the precision of 'about equal' adapts to the precision of the numbers.
I found a rounding-based approach at How do I find if two variables are approximately equals? but there is still the open-ended question of what to use for epsilon.
Is anyone aware of best practices for this sort of problem? Thanks in advance!
EDIT: My initial question did not contain enough information about what I was trying to achieve; my apologies. I want a method that holds higher-precision numbers to a higher standard while being more lenient with lower-precision numbers. More example pairs (where '(0)' is an implied zero):
1.077 and 1.07(0) returns false (because 77 is very different from 70)
1.000077 and 1.00007(0) returns false (because 77 is very different from 70)
1.071 and 1.07(0) returns true (because 71 is close to 70)
1.000071 and 1.00007(0) returns true (because 71 is close to 70)
Regardless of the implementation, I assume there will be a 'tolerance' variable of some sort to determine what is 'very different' and what is 'close'.
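To make the behavior concrete, here is a rough sketch of one interpretation, not a solution I'm committed to. It assumes each input's intended precision can be recovered from its shortest round-trip string, which only holds if the doubles came straight from parsed text; AdaptiveAboutEqual, DecimalPlaces, and toleranceUnits are names I made up, not an existing API:

using System;
using System.Globalization;

public static class AdaptiveAboutEqual
{
    // 'toleranceUnits' is how many units in the last decimal place
    // of the more precise operand we are willing to tolerate.
    public static bool AboutEqual(double a, double b, int toleranceUnits = 2)
    {
        int finer = Math.Max(DecimalPlaces(a), DecimalPlaces(b));
        double unitInLastPlace = Math.Pow(10, -finer);
        return Math.Abs(a - b) <= toleranceUnits * unitInLastPlace;
    }

    // Counts digits after the decimal point in the shortest string
    // that round-trips the double. Ignores exponential notation.
    private static int DecimalPlaces(double d)
    {
        string s = d.ToString("R", CultureInfo.InvariantCulture);
        int dot = s.IndexOf('.');
        return dot < 0 ? 0 : s.Length - dot - 1;
    }
}

With toleranceUnits = 2 this returns true for 1.234 vs 1.235 and 1.071 vs 1.07(0), and false for 1.234567 vs 1.234599 and 1.077 vs 1.07(0).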
One way to compare floating-point numbers is to count how many representable floating-point values separate them. This approach automatically scales with the magnitude of the numbers, so you don't have to worry about choosing an epsilon.
A description of the algorithm can be found here (the AlmostEqual2sComplement function at the end), and here is my C# version of it.
using System;

public static class DoubleComparerExtensions
{
    public static bool AlmostEquals(this double left, double right, long representationTolerance)
    {
        // Reinterpret both doubles as ordered integers and compare how many
        // representable doubles lie between them.
        long leftAsBits = left.ToBits2Complement();
        long rightAsBits = right.ToBits2Complement();
        long floatingPointRepresentationsDiff = Math.Abs(leftAsBits - rightAsBits);
        return floatingPointRepresentationsDiff <= representationTolerance;
    }

    // Requires compiling with /unsafe.
    private static unsafe long ToBits2Complement(this double value)
    {
        // Reinterpret the double's raw bits as a 64-bit integer.
        double* valueAsDoublePtr = &value;
        long* valueAsLongPtr = (long*)valueAsDoublePtr;
        long valueAsLong = *valueAsLongPtr;
        // IEEE 754 stores sign + magnitude; remap negative values so that
        // the integer ordering matches the numeric ordering.
        return valueAsLong < 0
            ? (long)(0x8000000000000000 - (ulong)valueAsLong)
            : valueAsLong;
    }
}
If you'd like to compare floats, change all double to float, all long to int, and 0x8000000000000000 to 0x80000000.
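For example, a small usage sketch (the tolerance values here are arbitrary):

double a = 0.3;
// Construct the very next representable double above 0.3.
double b = BitConverter.Int64BitsToDouble(BitConverter.DoubleToInt64Bits(a) + 1);

Console.WriteLine(a == b);                 // False
Console.WriteLine(a.AlmostEquals(b, 1));   // True: exactly one representation apart
Console.WriteLine(a.AlmostEquals(0.4, 1)); // False: vastly more than one apart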
float and double do not have precision in the way you are thinking. Programs fake precision by truncating trailing zeroes when formatting output, but you can't make use of this trick for your purposes, since rounding errors will prevent such a trick from being reliable.
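For instance, a double has no memory of how it was written:

double d = 1.070;
Console.WriteLine(d);             // prints "1.07"; the trailing zero was never stored
Console.WriteLine(1.07 == 1.070); // True: they are the exact same double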
decimal does keep track of how many digits lie to the right of the decimal point, but this is still worthless for implementing the algorithm you propose, as any operation which introduces a representational error (e.g., dividing by 3) will tend to max out the number of digits to the right of the decimal point.
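For illustration:

decimal d = 1.00m;
Console.WriteLine(d);       // prints "1.00"; decimal remembers its scale
Console.WriteLine(1m / 3m); // prints "0.3333333333333333333333333333"; the scale is maxed out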
If you want to actually have fuzzy equality based on the known precision of your data, one way to do it would be to create your own number class. Something like this:
public class DoublePrecise
{
    public readonly double Value;
    public readonly double Error;

    public DoublePrecise(double value, double error) {
        Value = value;
        Error = error;
    }

    // Subtracting two measurements: the values subtract,
    // but their error bounds add.
    public static DoublePrecise operator -(DoublePrecise d1, DoublePrecise d2) {
        return new DoublePrecise(d1.Value - d2.Value, d1.Error + d2.Error);
    }

    //More stuff.
}
Basically, this lets you represent numbers like 10.0±0.1. In that situation, you would treat two numbers as approximately equal if their ranges overlap (though in reality this would make your equality operation mean "could be equal" and your inequality operation mean "definitely not equal").
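A sketch of that overlap test, continuing the class above (the method name is my own, not part of the original code):

public bool CouldEqual(DoublePrecise other) {
    // The ranges [Value ± Error] overlap exactly when the centers
    // are no further apart than the combined error bounds.
    return Math.Abs(Value - other.Value) <= Error + other.Error;
}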
See also: Interval Arithmetic.