Is there an easy way to determine the sign of a floating point number?
I experimented and came up with this:
#include <iostream>

int main(int argc, char** argv)
{
    union
    {
        float f;
        char c[4];
    };

    f = -0.0f;
    std::cout << (c[3] & 0x10000000) << "\n";

    std::cin.ignore();
    std::cin.get();
    return 0;
}
where (c[3] & 0x10000000) gives a value > 0 for a negative number, but I think this requires me to make the following assumptions:
Please correct me if any of those assumptions are wrong or if I have missed any.
Use signbit() from math.h.
Coming to this late, but I thought of another approach.
If you know your system uses IEEE754 floating-point format, but not how big the floating-point types are relative to the integer types, you could do something like this:
#include <climits> // for CHAR_BIT

bool isFloatIEEE754Negative(float f)
{
    float d = f;
    if (sizeof(float) == sizeof(unsigned short int)) {
        return (*(unsigned short int *)(&d) >> (sizeof(unsigned short int)*CHAR_BIT - 1) == 1);
    }
    else if (sizeof(float) == sizeof(unsigned int)) {
        return (*(unsigned int *)(&d) >> (sizeof(unsigned int)*CHAR_BIT - 1) == 1);
    }
    else if (sizeof(float) == sizeof(unsigned long)) {
        return (*(unsigned long *)(&d) >> (sizeof(unsigned long)*CHAR_BIT - 1) == 1);
    }
    else if (sizeof(float) == sizeof(unsigned char)) {
        return (*(unsigned char *)(&d) >> (sizeof(unsigned char)*CHAR_BIT - 1) == 1);
    }
    else if (sizeof(float) == sizeof(unsigned long long)) {
        return (*(unsigned long long *)(&d) >> (sizeof(unsigned long long)*CHAR_BIT - 1) == 1);
    }
    return false; // should never get here if you've covered all the potential types!
}
Essentially, you treat the bytes of your float as an unsigned integer type, then right-shift all but one bit (the sign bit) out of existence. Because '>>' operates on the integer's value rather than its byte layout, this sidesteps endianness issues.
If it's possible to determine pre-execution which unsigned integer type is the same length as the floating point type, you could abbreviate this:
#define FLOAT_EQUIV_AS_UINT unsigned int // or whatever it is

bool isFloatIEEE754Negative(float f)
{
    float d = f;
    return (*(FLOAT_EQUIV_AS_UINT *)(&d) >> (sizeof(FLOAT_EQUIV_AS_UINT)*CHAR_BIT - 1) == 1);
}
This worked on my test systems; anyone see any caveats or overlooked 'gotchas'?
Try
float s = copysign(1, f);
from <math.h>
Another helpful thing may be #including <ieee754.h>, if it's available on your system/compiler.