Here are two different ways I'm defining the same value. I want it to exist as a 64-bit (double-precision) floating point number (aka a double).
#define THISVALUE -0.148759f
double myDouble = -0.148759;
If I perform the following operation
double tryingIt = THISVALUE;
and I look at the value during debugging or print it, I can see it assigns tryingIt to -0.14875899255275726.
I understand that floating point is not exact, but this is just a crazy difference that really throws off my math. Directly assigning the double, as in the top code block, gives me a value of -0.14875900000000000 in the debugger, exactly what it should be.
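Here's a minimal sketch of what I'm seeing (printing at 15 significant digits; variable names match the snippets above):
#include <stdio.h>

#define THISVALUE -0.148759f              /* float literal, as in my macro */

int main(void)
{
    double tryingIt = THISVALUE;          /* float value widened to a double */
    double myDouble = -0.148759;          /* double literal assigned directly */

    printf("tryingIt = %.15g\n", tryingIt);   /* shows -0.148758992552757 */
    printf("myDouble = %.15g\n", myDouble);   /* shows -0.148759 */
    return 0;
}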
Any thoughts on what's up?
Because -0.148759f is not a double, it's a float. Hence it's almost certainly the differing precision which is making a difference.
Either of these two variations should give you identical results:
#define THISVALUE -0.148759
double myDouble = -0.148759; // Both double.
#define THISVALUE -0.148759f
double myDouble = -0.148759f; // Both float.
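As a quick check, something along these lines (a rough sketch) should confirm that once the macro and the literal agree on type, the two variables compare exactly equal:
#include <stdio.h>

#define THISVALUE -0.148759               /* double literal, no 'f' suffix */

int main(void)
{
    double tryingIt = THISVALUE;
    double myDouble = -0.148759;

    /* Identical double literals give bit-identical values. */
    printf("equal: %s\n", tryingIt == myDouble ? "yes" : "no");   /* prints "yes" */
    return 0;
}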
IEEE754 single precision values (commonly used in float) have only 32 bits available to them so have limited range and precision compared to double precision values (which have 64 bits).
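If you want to see the limits on your own implementation, <float.h> exposes them; a quick sketch:
#include <stdio.h>
#include <float.h>

int main(void)
{
    /* Largest finite value and guaranteed round-trip decimal digits
       for each type, straight from <float.h>. */
    printf("float : max %g, %d digits\n", FLT_MAX, FLT_DIG);   /* roughly 3.4e38, 6 digits  */
    printf("double: max %g, %d digits\n", DBL_MAX, DBL_DIG);   /* roughly 1.8e308, 15 digits */
    return 0;
}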
As per the Wikipedia page on IEEE754, rough figures for range and precision are:
singles, range ±10^±38 with 7 digits precision.
doubles, range ±10^±308 with 15 digits precision.
And, as an aside, there's nowhere near as much reason for using macros nowadays, either for functions or objects. The former can be done with the inline suggestion and good compilers, the latter can be done with const int (or const double in your case) without losing any information between compilation stages (things like names and type information).
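For example, in your case, a typed constant (the name below is just illustrative) keeps both the name and the type visible to the compiler and the debugger:
#include <stdio.h>

/* Typed constant instead of a macro: the compiler knows it's a double,
   and the name survives into the debug information. */
static const double kThisValue = -0.148759;

int main(void)
{
    double tryingIt = kThisValue;         /* no float truncation anywhere */
    printf("%.15g\n", tryingIt);          /* prints -0.148759 */
    return 0;
}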