I am reading a C book that discusses the ranges of floating-point types. The author gives this table:
Type     Smallest Positive Value   Largest Value      Precision
====     =======================   =============      =========
float    1.17549 x 10^-38          3.40282 x 10^38    6 digits
double   2.22507 x 10^-308         1.79769 x 10^308   15 digits
I don't know where the numbers in the Smallest Positive Value and Largest Value columns come from.
The range is the minimum to maximum value representable by a data type. As an integer example: in C, `int` is at least 16 bits wide (and exactly 16 bits on many older or embedded platforms). A signed 16-bit int spans -32768 to 32767, i.e. (-2^15) to (2^15 - 1). An unsigned 16-bit int spans 0 to 65535, i.e. 0 to (2^16 - 1).
A 32-bit floating-point number has 23 + 1 bits of mantissa (23 stored, plus one implicit leading 1 bit) and an 8-bit exponent (of which the range -126 to 127 is used for normal numbers), so the largest number you can represent is:
(1 + 1/2 + ... + 1/(2^23)) * (2^127)
= (2^23 + 2^22 + ... + 1) * (2^(127 - 23))
= (2^24 - 1) * (2^104)
~= 3.4e38