 

How many bits of precision for a double between -1.0 and 1.0?

In some of the audio libraries I've been looking at, a sample of audio is often represented as a double or a float with a range of -1.0 to 1.0. In some cases, this easily allows analysis and synthesis code to abstract away what the underlying datatype might actually be (signed long int, unsigned char, etc.).

Assuming IEEE 754, the density of representable values is non-uniform: it increases as numbers approach zero. This means we have less precision for numbers approaching -1 and 1.

This non-uniform density doesn't matter as long as we can represent every value of the underlying datatype we're converting to/from.

For instance, if the underlying data type were an unsigned char, we would only need 256 distinct values between -1 and 1 (8 bits); using a double is clearly not a problem.
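For concreteness, here is a minimal sketch of that 8-bit case in C; the midpoint-128 scaling is one common convention, and the function names are illustrative:

    /* Sketch: every one of the 256 unsigned char values maps to a
     * distinct double and back again without loss. */
    #include <stdio.h>

    static double u8_to_double(unsigned char s)
    {
        return ((double)s - 128.0) / 128.0;   /* range: -1.0 to 127/128 */
    }

    static unsigned char double_to_u8(double x)
    {
        return (unsigned char)(x * 128.0 + 128.0);
    }

    int main(void)
    {
        for (int s = 0; s <= 255; s++)
            if (double_to_u8(u8_to_double((unsigned char)s)) != s)
                printf("mismatch at %d\n", s);   /* never prints */
        return 0;
    }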

My question is: how many bits of precision do I have? Can I safely convert to/from a 32-bit integer without loss? To extend the question, what would the range of values have to be to safely convert to/from a 32-bit integer without loss?

Thanks!

Brett asked Dec 17 '22


1 Answer

For IEEE doubles, you have a 53-bit significand (52 stored bits plus an implicit leading 1), which is enough to represent 32-bit integers considered as fixed-point numbers between -1 (0x80000000) and 1 - 2^-31 (0x7FFFFFFF).
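To make that concrete, here is a minimal sketch of the round trip in C; the divide-by-2^31 scaling matches the fixed-point interpretation above, and the function names are assumptions:

    #include <stdint.h>
    #include <stdio.h>

    /* Scale a 32-bit sample into [-1.0, 1 - 2^-31]. Both steps are exact:
     * any int32_t fits in the 53-bit significand, and dividing by 2^31
     * (a power of two) only adjusts the exponent. */
    static double i32_to_double(int32_t s)
    {
        return (double)s / 2147483648.0;
    }

    static int32_t double_to_i32(double x)
    {
        return (int32_t)(x * 2147483648.0);
    }

    int main(void)
    {
        int32_t samples[] = { INT32_MIN, -1, 0, 1, 123456789, INT32_MAX };
        for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++)
            if (double_to_i32(i32_to_double(samples[i])) != samples[i])
                printf("lossy at %ld\n", (long)samples[i]);   /* never prints */
        return 0;
    }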

Floats have a 24-bit significand, which is not enough.
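And a minimal sketch of the float case failing, under the same assumed scaling:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t s = 123456789;               /* needs 27 significant bits */
        float x = (float)s / 2147483648.0f;  /* rounds here: only 24 bits kept */
        int32_t back = (int32_t)(x * 2147483648.0f);
        printf("%ld -> %ld\n", (long)s, (long)back);  /* prints 123456789 -> 123456792 */
        return 0;
    }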

Alexandre C. answered Feb 23 '23