 

Are std::greater<double> and std::less<double> safe to use?

When comparing double values in C++ using the <, >, ==, != operators, we cannot always be sure about the correctness of the result. That's why we use other techniques to compare doubles; for example, we can compare two doubles a and b by testing whether their difference is really close to zero. My question is: does the C++ standard library implement std::less<double> and std::greater<double> using these techniques, or does it just use the unsafe comparison operators?
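For reference, the difference-based technique mentioned above usually looks something like this (a minimal sketch; the helper name nearly_equal and the default tolerance are illustrative choices, not anything from the standard library):

    #include <cmath>

    // Illustrative helper: treats a and b as equal when their
    // absolute difference is within a fixed tolerance. Both the
    // name and the tolerance are arbitrary choices for this sketch.
    bool nearly_equal(double a, double b, double tolerance = 1e-9) {
        return std::fabs(a - b) < tolerance;
    }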

Asked Aug 06 '13 by Rontogiannis Aristofanis

3 Answers

You can be 100% sure about the correctness of the result of those operators. It's just that a prior calculation may have resulted in truncation, because the precision of a double is not endless. So the operators are perfectly fine; it's your operands that may not be what you expected them to be.

So it does not matter what you use for comparison.
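A minimal illustration of this point: the comparison below is exact, but the operands were already rounded when the decimal literals were converted to binary, so the result may be surprising:

    #include <cstdio>

    int main() {
        // Neither 0.1 nor 0.2 is exactly representable in binary,
        // so their sum is not exactly 0.3 either.
        double sum = 0.1 + 0.2;
        if (sum == 0.3)
            std::printf("equal\n");
        else
            std::printf("not equal: sum is %.17g\n", sum);  // 0.30000000000000004
        return 0;
    }

The == comparison itself is exact; it is the operands that already carry rounding error.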

Answered Oct 03 '22 by nvoigt

They use the standard operators. Here is the definition of std::greater in the stl_function.h header file (libstdc++'s implementation):

  template<typename _Tp>
    struct greater : public binary_function<_Tp, _Tp, bool>
    {
      bool
      operator()(const _Tp& __x, const _Tp& __y) const
      { return __x > __y; }
    };
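In other words, calling the functor does exactly what the built-in operator does. A quick sketch:

    #include <cassert>
    #include <functional>

    int main() {
        std::greater<double> gt;
        // The functor simply forwards to operator>, so these agree:
        assert(gt(2.0, 1.0) == (2.0 > 1.0));
        assert(gt(1.0, 2.0) == (1.0 > 2.0));
        return 0;
    }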
Answered Oct 03 '22 by cpp


operator< and operator> do give the correct result, at least as far as possible. However, there are some fundamental problems involved with using floating-point arithmetic, especially double. These problems are not reduced by using the comparison functions you mention, as they are inherent to the floating-point representation used by current CPUs.

As for the functions std::less / std::greater: They are just packaged versions of the standard operators, intended to be used when a binary predicate is needed in STL algorithms.
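A typical use, sketched here, is sorting in descending order:

    #include <algorithm>
    #include <cstdio>
    #include <functional>
    #include <vector>

    int main() {
        std::vector<double> v{3.5, 1.25, 2.75};
        // std::greater<double> supplies the binary predicate "x > y",
        // so the sort produces descending order.
        std::sort(v.begin(), v.end(), std::greater<double>());
        for (double d : v)
            std::printf("%g ", d);  // prints: 3.5 2.75 1.25
        std::printf("\n");
        return 0;
    }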

A double value has a 64-bit representation, whereas the Intel CPUs' original "double" arithmetic (the x87 FPU) is done in 80 bits. That sounds good at first, as if you get some more precision "for free", but it also means that the result depends on whether the compiler lets the code use intermediate results directly from the FPU registers (in 80 bits) or from the values written back to memory (rounded to 64 bits). This kind of optimization is completely up to the compiler and isn't defined by any standard.
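A minimal sketch of this effect. The outcome is compiler- and flag-dependent: with x87 math (e.g. GCC's -m32 -mfpmath=387, typically without optimization) the two values below may disagree, while with SSE2 math (the x86-64 default) they agree:

    #include <cstdio>

    int main() {
        volatile double third = 1.0 / 3.0;  // volatile defeats constant folding
        double a = third;
        volatile double stored = a * 3.0;   // volatile forces a round trip
                                            // through a 64-bit memory slot
        double kept = a * 3.0;              // may stay in an 80-bit register

        if (stored == kept)
            std::printf("intermediates rounded identically\n");
        else
            std::printf("excess precision changed the result\n");
        return 0;
    }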

To make things more complex, modern compilers can also make use of the newer vector instructions (SSE/SSE2; MMX is integer-only), which operate on plain 64-bit doubles. The problems described above do not appear in this context. However, it depends on the compiler whether it makes use of these instructions for floating-point arithmetic.

Comparisons for less/greater of almost-equal values will always suffer when the difference is only in the last bits of the mantissa -- such values are always subject to truncation errors, and you should make sure that your program does not critically rely on the result of a comparison of very close values.

You can, for example, consider them equal when their difference is less than a threshold, e.g. if (fabs(a - b)/a < factor*DBL_EPSILON) { /* EQUAL */ }. DBL_EPSILON is defined in float.h, and factor depends on how many mathematical operations with possible truncation/rounding have been made previously, and should be tested thoroughly. I've been safe with values around factor=16..32, but your mileage may vary.
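A sketch of that relative comparison, wrapped into a helper (the function name and the guard for a near zero are additions for this illustration; the original one-liner divides by a, which breaks down when a is tiny or zero):

    #include <cfloat>
    #include <cmath>

    // Relative comparison as described above: factor scales DBL_EPSILON
    // according to how much rounding error may have accumulated.
    bool almost_equal(double a, double b, double factor = 32.0) {
        double diff = std::fabs(a - b);
        if (std::fabs(a) < DBL_MIN)               // a is (almost) zero:
            return diff < factor * DBL_EPSILON;   // fall back to an absolute test
        return diff / std::fabs(a) < factor * DBL_EPSILON;
    }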

Answered Oct 03 '22 by Piotr99