Same calculation on Linux and Windows --> different results

I have coded the following algorithm to convert a decimal value into binary/hexadecimal, etc.:

string toFormatFromDecimal(long long t, Format format) {
    // number of digits t needs in the target base (computed via floating-point log)
    int digitCount = ceil(log(t) / log((int) format));
    string hex = "";
    for (int i = 0; i < digitCount; i++) {
        long double cur = (long double)t / (long double)(format);
        long long ganzzahl = (long long) cur;       // integer part of the quotient
        long double kommazahl = cur - ganzzahl;     // fractional part
        hex += digits[(long long) (kommazahl * format)];  // recover the digit from the fraction
        t = ganzzahl;
    }
    return string(hex.rbegin(), hex.rend());
}

I use GCC on Linux and the Visual Studio C++ compiler on Windows. It seems that I get different values at the "integer" division here:

long long ganzzahl = (long long) cur;

Any idea how this could happen? Are there different precisions on Linux and Windows?

Thanks, Florian
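
For reference, a minimal sketch that reproduces the divergence: 2^62 + 1 is exactly representable in an 80-bit long double (64 significand bits) but not in a 64-bit one (53 significand bits), so the very first division already truncates differently:

#include <cstdio>

int main() {
    long long t = (1LL << 62) + 1;             // 0x4000000000000001, last hex digit is 1
    long double cur = (long double) t / 16.0L;
    long long ganzzahl = (long long) cur;      // integer part
    long double kommazahl = cur - ganzzahl;    // fractional part

    // With a 64-bit significand (GCC on x86) t converts exactly and this
    // prints 1; with a 53-bit significand (Visual Studio C++) t rounds to
    // 2^62 during the conversion and this prints 0.
    std::printf("last hex digit: %lld\n", (long long) (kommazahl * 16));
    return 0;
}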

--Solution--

string toFormatFromDecimal(long long t, Format format) {
    int digitCount = ceil(log(t) / log((int) format));
    string hex = "";
    for (int i = 0; i < digitCount; i++) {
        hex += digits[(int) (t % format)];   // next digit, pure integer arithmetic
        t = t / format;
    }
    return string(hex.rbegin(), hex.rend());
}
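
For completeness, a sketch of the same integer-only idea that also avoids log(), which is undefined for t == 0 and can make digitCount off by one for exact powers of the base. The name toFormatFromDecimalSafe and the inline digits table are illustrative, not part of the original:

string toFormatFromDecimalSafe(long long t, int base) {
    static const char digits[] = "0123456789ABCDEF";  // supports bases up to 16
    if (t == 0) return "0";                           // the log-based count fails here
    string out;
    while (t > 0) {                                   // negative t not handled, as above
        out += digits[(int) (t % base)];              // least significant digit first
        t = t / base;
    }
    return string(out.rbegin(), out.rend());
}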
asked May 24 '13 by user2071938

1 Answer

Yes, GCC and Visual Studio C++ have different long double types. With GCC generating code for x86, long double is the 80-bit double-extended IEEE 754 format (*), whereas Visual Studio C++ treats long double as the 64-bit double-precision IEEE 754 format (**).

So (long double)t does not have to be the same number on both platforms, and the division is not the same either. Although you have tagged your question “integer-division”, it is a floating-point division between different floating-point types.
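
A quick way to confirm this on both compilers is to print the significand width of long double; the following sketch does that check:

#include <iostream>
#include <limits>

int main() {
    // Prints 64 with GCC targeting x86 (80-bit extended precision)
    // and 53 with Visual Studio C++ (long double is double there).
    std::cout << "long double significand bits: "
              << std::numeric_limits<long double>::digits << '\n';
    std::cout << "sizeof(long double): " << sizeof(long double) << " bytes\n";
    return 0;
}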

(*) almost: it behaves very much like a 79-bit IEEE 754 type with 15 exponent bits and 63 significand bits would, but it has a slightly wider exponent range, since it uses an explicit bit for the leading 1 of the significand.

(**) almost: because the compiler generates code that uses the historical x87 instructions after configuring the x87 for 53-bit significands, denormal results may be double-rounded (reference).

answered Nov 05 '22 by Pascal Cuoq