I want to convert a float to an unsigned long while keeping the binary representation of the float (so I do not want to cast 5.0 to 5!). This is easy to do in the following way:
float f = 2.0;
unsigned long x = *((unsigned long*)&f);
However, now I need to do the same thing in a #define, because I want to use this later on in some array initialization (so an inline function is not an option).
This does not compile:
#define f2u(f) *((unsigned long*)&f)
If I call it like this:
unsigned long x[] = { f2u(1.0), f2u(2.0), f2u(3.0), ... };
The error I get is (logically):
lvalue required as unary ‘&’ operand
Note: One solution that was suggested below was to use a union type for my array. However, that's no option. I'm actually doing the following:

#define Calc(x) (((x & 0x7F800000) >> 23) - 127)
unsigned long x[] = { Calc(f2u(1.0)), Calc(f2u(2.0)), Calc(f2u(3.0)), ... };

So the array really will/must be of type unsigned long[].
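For concreteness, here is a worked example of what Calc computes (my addition, assuming IEEE-754 single-precision floats): it extracts the unbiased exponent from the bit pattern.

/* 2.0f is stored as 0x40000000:
 *   (0x40000000 & 0x7F800000) >> 23  ==  0x80  ==  128
 *   128 - 127                        ==  1       (2.0 == 2^1)
 * so Calc(f2u(2.0)) yields the unbiased exponent, 1.
 */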
You should probably use a union:
union floatpun {
float f;
unsigned long l;
};
union floatpun x[3] = { {1.0}, {2.0}, {3.0} };
or perhaps:
union {
float f[3];
unsigned long l[3];
} x = { { 1.0, 2.0, 3.0 } };
(The latter will let you pass x.l where you need an array of type unsigned long [3].)
Of course you need to ensure that unsigned long and float have the same size on your platform.
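A minimal sketch of how to enforce that at compile time (my addition, assuming a C11 compiler for _Static_assert; pre-C11 compilers can use the negative-array-size trick shown in the comment):

/* Compile-time guard: the pun only makes sense if both types
   occupy the same number of bytes. */
_Static_assert(sizeof(float) == sizeof(unsigned long),
               "float and unsigned long must have the same size");

/* Pre-C11 alternative:
   typedef char floatpun_check[sizeof(float) == sizeof(unsigned long) ? 1 : -1]; */

union floatpun {
    float f;
    unsigned long l;
};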
Given the constraint from the question (the Calc() results must end up in a plain unsigned long array), you probably won't be able to avoid an intermediate step:
float x[] = { 1.0, 2.0, 3.0, ... };
unsigned long y[sizeof x / sizeof x[0]];
size_t i;

for (i = 0; i < sizeof x / sizeof x[0]; i++) {
    y[i] = Calc(f2u(x[i]));
}

(This works where the macro failed before, because x[i] is an lvalue.)
I admit it is not very elegant. But if you run into memory difficulties because of that (embedded system?), you can do this step separately and automatically generate a source file with the correct array.
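A minimal sketch of such a generator (my addition; the value list is a placeholder, and it assumes 32-bit IEEE-754 floats on a little-endian host or a 32-bit unsigned long): run it at build time and paste or #include its output.

#include <stdio.h>
#include <string.h>

#define Calc(x) ((((x) & 0x7F800000UL) >> 23) - 127)

/* Host-side f2u: copy the float's bytes via memcpy to avoid
   strict-aliasing problems. */
static unsigned long f2u(float f)
{
    unsigned long u = 0;
    memcpy(&u, &f, sizeof f);
    return u;
}

int main(void)
{
    static const float values[] = { 1.0f, 2.0f, 3.0f };  /* placeholder data */
    size_t i;

    printf("unsigned long x[] = {");
    for (i = 0; i < sizeof values / sizeof values[0]; i++)
        printf("%s %lu", i ? "," : "", Calc(f2u(values[i])));
    printf(" };\n");
    return 0;
}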
EDIT:
Yet another solution would be to tell the compiler what you really want. Obviously, you want to calculate the exponent of a floating-point number, so you could just do:

#include <math.h>
#define expo(f) ((long)floor(log2((f))))
That seems to do exactly what you intend. (floor() matters for values below 1.0, whose exponents are negative: a plain cast to long would truncate toward zero instead of rounding down. log2() is C99 and avoids the rounding surprises that log(f)/log(2) can produce at exact powers of two.)
And it seems to me that a signed char would be enough, and if not, an int16_t.
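A quick sanity check (my addition) comparing the log-based macro against direct bit extraction, including values below 1.0 where plain truncation would go wrong:

#include <math.h>
#include <stdio.h>
#include <string.h>

#define expo(f) ((long)floor(log2((f))))

/* Reference: read the IEEE-754 exponent field directly (assumes
   32-bit floats on a little-endian host or a 32-bit unsigned long). */
static long expo_bits(float f)
{
    unsigned long u = 0;
    memcpy(&u, &f, sizeof f);
    return (long)((u & 0x7F800000UL) >> 23) - 127;
}

int main(void)
{
    const float tests[] = { 0.25f, 0.75f, 1.0f, 1.5f, 8.0f };
    size_t i;

    for (i = 0; i < sizeof tests / sizeof tests[0]; i++)
        printf("%g: %ld %ld\n", (double)tests[i],
               expo(tests[i]), expo_bits(tests[i]));
    return 0;
}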
lvalue means something assignable (roughly: something with an address). 1.0 is a constant, not a variable, and you can neither take its address nor assign to it.
Meaning, this:

unsigned long x[3] = { f2u(1.0), f2u(2.0), f2u(3.0) };

is actually:

unsigned long x[3] = { *((unsigned long*)&1.0), *((unsigned long*)&2.0), *((unsigned long*)&3.0) };
and 1.0, 2.0 and 3.0 have no address.
The problem is not related to #define, since a define is a simple textual substitution; this code is invalid as well:

unsigned long x = *((unsigned long*)&1.0);
The problem is that you are trying to take the address of immediate values, which have none.
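A possible workaround sketch (my addition, not part of the original answer): C99 compound literals are lvalues, so they do have an address. This only helps for arrays with automatic storage, where initializers need not be constant expressions, and it still assumes sizeof(unsigned long) == sizeof(float) (plus tolerance for the strict-aliasing violation; memcpy is the conforming route).

/* C99: (float){ f } creates an unnamed object with an address,
   so the unary & is legal.  Function scope only: these
   initializers are not constant expressions. */
#define f2u(f) (*((unsigned long*)&(float){ (f) }))

void example(void)
{
    unsigned long x[] = { f2u(1.0f), f2u(2.0f), f2u(3.0f) };
    (void)x;  /* silence unused-variable warnings in this sketch */
}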