Floating point behaviour is implementation-defined in C, so there aren't any guarantees.
Our code needs to be portable, and we are discussing whether it is acceptable to use IEEE 754 floats in our protocol. For performance reasons, it would be nice if we didn't have to convert back and forth to a fixed-point format when sending or receiving data.
I know that there can be differences between platforms and architectures regarding the size of long or wchar_t, but I can't seem to find anything specific about float and double.
What I have found so far is that the byte order may be reversed on big-endian platforms, and that there are platforms without floating-point support where code containing float and double won't even link. Otherwise, platforms seem to stick to IEEE 754 single and double precision.
So is it safe to assume that floating point is IEEE 754 when it is available at all?
EDIT: In response to a comment:
What is your definition of "safe"?
By safe I mean that the bit pattern on one system means the same on another (after byte swapping to deal with endianness).
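To make that concrete, here is a minimal sketch of what that assumption would buy you in a protocol: the float's bit pattern is copied into a 32-bit integer and written out in a fixed (big-endian) wire order. It assumes float is a 32-bit IEEE 754 binary32 value; the names put_float_be/get_float_be are made up for this example, not part of any standard API.

    #include <stdint.h>
    #include <string.h>

    /* Sketch: serialize a float as 4 bytes in big-endian wire order.
     * Assumes float is a 32-bit IEEE 754 binary32 value. */
    static void put_float_be(unsigned char out[4], float f)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);      /* reinterpret the bit pattern */
        out[0] = (unsigned char)(bits >> 24);
        out[1] = (unsigned char)(bits >> 16);
        out[2] = (unsigned char)(bits >> 8);
        out[3] = (unsigned char)(bits);
    }

    static float get_float_be(const unsigned char in[4])
    {
        uint32_t bits = ((uint32_t)in[0] << 24) | ((uint32_t)in[1] << 16)
                      | ((uint32_t)in[2] << 8)  |  (uint32_t)in[3];
        float f;
        memcpy(&f, &bits, sizeof f);         /* rebuild the float from its bits */
        return f;
    }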
There are three binary floating-point basic formats (encoded with 32, 64 or 128 bits) and two decimal floating-point basic formats (encoded with 64 or 128 bits). The binary32 and binary64 formats are the single and double formats of IEEE 754-1985 respectively.
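If you want the build to fail on a platform where float and double do not look like binary32 and binary64, a rough compile-time check is possible. This is only a sketch and assumes a C11 compiler; __STDC_IEC_559__ is the standard macro an implementation may define to claim IEC 60559 (IEEE 754) conformance, but implementations are not required to define it even when their formats match.

    #include <float.h>
    #include <limits.h>

    /* #warning is a common extension (standardized in C23);
     * use #error instead if you want a hard failure. */
    #if !defined(__STDC_IEC_559__)
    #  warning "Implementation does not claim IEC 60559 (IEEE 754) conformance"
    #endif

    _Static_assert(sizeof(float) * CHAR_BIT == 32, "float is not 32 bits");
    _Static_assert(sizeof(double) * CHAR_BIT == 64, "double is not 64 bits");
    _Static_assert(FLT_MANT_DIG == 24 && DBL_MANT_DIG == 53,
                   "float/double do not match IEEE 754 binary32/binary64");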
Floating-point constants are positive unless they're preceded by a minus sign (-). In this case, the minus sign is treated as a unary arithmetic negation operator. Floating-point constants have type float, double, or long double. The Microsoft C compiler internally represents long double the same as type double.
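As a small illustration of how the suffix selects the constant's type (the note about long double matching double applies only to MSVC, per the quoted text above):

    float       f  = 1.5f;   /* 'f' suffix: constant has type float */
    double      d  = 1.5;    /* no suffix: constant has type double */
    long double ld = 1.5L;   /* 'L' suffix: constant has type long double */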
IEEE Standard 754 floating point is the most common representation today for real numbers on computers, including Intel-based PCs, Macs, and most Unix platforms. The sign bit is as simple as it gets: 0 represents a positive number, while 1 represents a negative number.
There are three standard types in IEEE floating point arithmetic: single precision, double precision and extended precision.
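A quick way to see the sign/exponent/fraction layout is to pull the fields out of the raw bit pattern. This sketch assumes float is a 32-bit IEEE 754 binary32 value:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        float f = -6.25f;
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);           /* view the raw bit pattern */

        unsigned sign     = bits >> 31;           /* 1 bit */
        unsigned exponent = (bits >> 23) & 0xFF;  /* 8 bits, biased by 127 */
        uint32_t fraction = bits & 0x7FFFFF;      /* 23 bits */

        /* For -6.25f this prints: sign=1 exponent=129 (unbiased 2) fraction=0x480000 */
        printf("sign=%u exponent=%u (unbiased %d) fraction=0x%06X\n",
               sign, exponent, (int)exponent - 127, fraction);
        return 0;
    }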
Essentially all architectures in current non-punch-card use, including embedded architectures and exotic signal processing architectures, offer one of two floating point systems:

- IEEE-754, or
- IEEE-754 "except for blah": that is, they mostly implement 754, but cheap out on some of the more expensive or fiddly bits.

The most common cheap-outs:

- Flushing denormals (subnormals) to zero. This invalidates certain sometimes-useful theorems (in particular, the theorem that a-b can be exactly represented if a and b are within a factor of 2), but in practice it's generally not going to be an issue.
- Failure to treat inf and NaN as special. These architectures will fail to follow the rules regarding inf and NaN as operands, and may not saturate to inf, instead producing numbers that are larger than FLT_MAX, which will generally be recognized by other architectures as NaN.

BUUUUT... even those "except for blah" architectures still use IEEE-754's representation of numbers. Other than byte ordering issues, the bits describing a float or double on architecture A are essentially guaranteed to have the same meaning on architecture B.
So as long as all you care about is the representation of values, you're totally fine. If you care about cross-platform consistency of operations, you may need to do some extra work.
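If you do care about consistency of operations, a few cheap runtime probes can tell you which of the cheap-outs above you are dealing with. This is only a sketch; on a strictly conforming IEEE 754 platform all three checks report "yes":

    #include <float.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Probe 1: are subnormals kept, or flushed to zero? */
        volatile float tiny = FLT_MIN;
        volatile float sub  = tiny / 2.0f;        /* subnormal if supported */
        printf("subnormals kept:    %s\n", (sub > 0.0f) ? "yes" : "no (flush-to-zero)");

        /* Probe 2: does overflow saturate to infinity? */
        volatile float big = FLT_MAX;
        volatile float ovf = big * 2.0f;
        printf("overflow -> inf:    %s\n", isinf(ovf) ? "yes" : "no");

        /* Probe 3: does NaN behave as NaN (inf - inf, unequal to itself)? */
        volatile float nan1 = ovf - ovf;
        printf("NaN behaves as NaN: %s\n", (isnan(nan1) && nan1 != nan1) ? "yes" : "no");
        return 0;
    }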
EDIT: As Chux mentions in the comments, a common extra source of inconsistency between platforms is the use of extended precision, such as the x87's 80-bit internal representation. That's the opposite of a cheap-out, and (with proper treatment) fully conforms to both IEEE-754 and the C standard, but it will likewise cause results to differ between architectures, and even between compiler versions and following apparently minor and unrelated code changes. However: a particular x86/x64 executable will NOT produce different results on different processors due to extended precision.
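You can at least detect whether intermediate results may be kept in a wider format by inspecting FLT_EVAL_METHOD from <float.h>, a standard macro since C99; the interpretation in the comments is a sketch of the common values:

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        /* FLT_EVAL_METHOD (C99):
         *   0  -> evaluate in the type of the operands
         *   1  -> evaluate float and double operations in double
         *   2  -> evaluate everything in long double (typical of x87 code)
         *  -1  -> indeterminable */
        printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
        return 0;
    }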