How to determine if a number, for example 1.577, can be precisely represented in float or double format?
By that I mean it is really 1.577, not 1.5769999999999994324 or the like.
EDIT: I'm looking for a tool where I can type a number and it will display its double/float representation, so it's not only a C#-related question.
If a floating-point literal ends with F or f, it's a float; otherwise, it's a double.
float is a 32-bit IEEE 754 single-precision number (1 sign bit, 8 exponent bits, 23 significand bits) with about 7 decimal digits of precision. double is a 64-bit IEEE 754 double-precision number (1 sign bit, 11 exponent bits, 52 explicitly stored significand bits plus an implicit leading bit) with about 15 decimal digits of precision.
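As an illustration (a small Python sketch, since the question isn't C#-specific), you can unpack a double into exactly those three fields:

```python
import struct

def double_fields(x: float):
    """Split an IEEE 754 double into its sign, exponent, and significand bits."""
    bits = int.from_bytes(struct.pack(">d", x), "big")   # raw 64-bit pattern
    sign = bits >> 63                        # 1 bit
    exponent = (bits >> 52) & 0x7FF          # 11 bits, biased by 1023
    significand = bits & ((1 << 52) - 1)     # 52 explicitly stored bits
    return sign, exponent, significand

print(double_fields(1.25))    # (0, 1023, 1125899906842624) -> fraction 0.25, i.e. exactly 1.25
print(double_fields(1.577))   # the 52-bit fraction can only approximate 0.577
```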
Short answer: the maximum value of a double-precision number (assuming IEEE 754 floating point) is exactly 2^1024 * (1 - 2^-53). For a single-precision number it's 2^128 * (1 - 2^-24).
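You can verify those limits exactly with rational arithmetic; for instance, a quick Python check (not part of the original answer):

```python
import sys
from fractions import Fraction

# Largest finite double: 2^1024 * (1 - 2^-53)
max_double = Fraction(2) ** 1024 * (1 - Fraction(2) ** -53)
print(Fraction(sys.float_info.max) == max_double)    # True

# Largest finite single: 2^128 * (1 - 2^-24), i.e. 2^128 - 2^104 (about 3.4028235e+38)
max_single = Fraction(2) ** 128 * (1 - Fraction(2) ** -24)
print(max_single == 2 ** 128 - 2 ** 104)             # True
```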
float and double differ in how many decimal digits they can hold accurately: float is good for about 7 decimal digits, while double is good for about 15.
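A quick way to see the difference is to round-trip the same value through single precision, for example in Python (a sketch that uses struct to force 32-bit storage):

```python
import struct

x = 1.577                                                   # stored as a double by Python
as_single = struct.unpack(">f", struct.pack(">f", x))[0]    # force it through 32-bit storage

print(repr(x))          # 1.577               -- the double keeps ~15 significant digits
print(repr(as_single))  # roughly 1.5770000219 -- single precision is only good for ~7
```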
You can use an online decimal to floating-point converter. For example, type in 1.577 and you get two indications that it is not exact:
1) The "Inexact" box is checked
2) It converts to 1.5769999999999999573674358543939888477325439453125 in double precision floating-point.
Contrast that with a number like 1.25, which converts exactly to 1.25; the "Inexact" box is NOT checked.
(That converter can also check single-precision numbers.)
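If you'd rather check this programmatically than through a web page, the same test takes only a few lines. Here is a Python sketch using just the standard library (the helper name is my own):

```python
from decimal import Decimal
from fractions import Fraction

def exactly_representable_as_double(text: str) -> bool:
    """True if the decimal literal converts to a double without any rounding."""
    return Fraction(float(text)) == Fraction(text)

print(exactly_representable_as_double("1.577"))  # False
print(exactly_representable_as_double("1.25"))   # True

# Decimal(float) shows the exact value the double actually stores:
print(Decimal(1.577))   # 1.5769999999999999573674358543939888477325439453125
print(Decimal(1.25))    # 1.25
```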