I'm working on a cryptography application, using Crypto++.
As an obscure part of this application, I need to determine the maximum number of unique float values that can exist within a certain numeric range.
Obviously there are infinitely many numbers between 0 and 1 in reality - but not all of them can be represented by a unique float value.
I have a minimum float value and a maximum float value, and I need to determine the number of possible float values inside this range. This is tricky, because floating-point values are spaced further apart the farther you get from 0. For example, the number of possible floating-point values between 0 and 1 is very different from the number of floating-point values between 100,000 and 100,001.
For my purposes, I want the count to include both the min and max values as well. But an algorithm that produces an exclusive count would be just as useful, as I could simply add 1 or 2 as needed.
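For tiny ranges I can get the count by brute force, stepping from the minimum to the maximum with std::nextafter and counting the steps. A rough sketch of what I mean (far too slow for wide ranges, and it assumes the inputs are finite and min <= max):

#include <cmath>
#include <cstdint>

// Brute force: step from fMin to fMax one representable float at a time.
// Assumes fMin <= fMax and both values are finite. Counts both endpoints,
// and counts zero only once, because +0 and -0 are the same value as far
// as nextafter is concerned.
uint64_t BruteForceRangeValueCount(float fMin, float fMax)
{
    uint64_t count = 1; // counts fMin itself
    for (float f = fMin; f < fMax; )
    {
        f = std::nextafter(f, fMax); // next representable value toward fMax
        ++count;
    }
    return count;
}

What I'm really after is a closed-form version of that count.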
Additional concern: what if 0 is within the range? For example, if the minimum value is -2.0 and the maximum value is positive 2.0, I don't want to count 0 twice (once for 0, and again for -0).
Additionally, what problems would arise if the minimum or maximum values are +/-infinity? (I'll probably just throw an exception if the minimum or maximum are NaN.)
uint32_t RangeValueCount(float fMin, float fMax)
{
    if (fMin > fMax)
        std::swap(fMin, fMax); // Ensure fMin <= fMax

    // Calculate the number of possible floating-point values between fMin and fMax.
    return (*reinterpret_cast<uint32_t*>(&fMax) -
            *reinterpret_cast<uint32_t*>(&fMin)) + 1;

    // This algorithm is obviously unsafe and assumes IEEE 754.
    // How should I account for -0 or infinity?
}
If this problem can be solved, I presume the solution would be equally applicable to double values (and possibly long double values, but that might be a little more complex due to the 80-bit extended format, etc.).
The three floating point types differ in how much space they use (32, 64, or 80 bits on x86 CPUs; possibly different amounts on other machines), and thus how much precision they provide.
To explicitly answer your question, that means for a 32-bit float, the largest finite value is (-1)^0 * 1.99999988079071044921875 * 2^127, which is approximately 3.4028235 × 10^38 (this is FLT_MAX). The smallest finite value is the negative of that.
Over the entire typical float range, about 2^32 different values can be represented.
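If you only need this for 32-bit floats, the usual trick is to map each bit pattern onto an unsigned sequence in which adjacent representable values differ by exactly one, and then subtract. A rough sketch, assuming IEEE 754 binary32 and finite inputs, using std::memcpy rather than reinterpret_cast to avoid aliasing problems (the helper names here are just for illustration):

#include <cstdint>
#include <cstring>

// Map a float's bit pattern to an unsigned encoding in which consecutive
// representable values differ by exactly 1 (modulo 2^32). Negative values
// are reflected around the sign bit so that -0 and +0 get the same encoding
// and zero is counted only once.
static uint32_t MonotonicEncoding(float f)
{
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    const uint32_t signBit = UINT32_C(1) << 31;
    return (bits & signBit) ? signBit - bits : bits;
}

// Inclusive count of representable floats in [fMin, fMax].
// Assumes fMin <= fMax and both are finite (not NaN and not infinity).
static uint64_t CountRepresentableFloats(float fMin, float fMax)
{
    return (uint64_t)(MonotonicEncoding(fMax) - MonotonicEncoding(fMin)) + 1;
}

The reflection (signBit - bits) makes the encodings of negative numbers continue the sequence downward from zero (modulo 2^32), so the single subtraction also covers ranges that straddle zero.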
Here is code that handles all finite numbers. It expects IEEE 754 arithmetic. I have replaced my previous version with simpler, cleaner code. Instead of two implementations of the distance calculation, this has two implementations of converting a floating-point number to its encoding (one by copying the bits, one by mathematically manipulating it). After that, the distance calculation is fairly simple (negative values have to be adjusted, and then the distance is simply a subtraction).
#include <cmath>
#include <cstdint>
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <limits>
typedef double Float; // The floating-point type to use.
typedef std::uint64_t UInt; // Unsigned integer of same size as Float.
/* Define a value with only the high bit of a UInt set. This is also the
encoding of floating-point -0.
*/
static constexpr UInt HighBit
= std::numeric_limits<UInt>::max() ^ std::numeric_limits<UInt>::max() >> 1;
// Return the encoding of a floating-point number by copying its bits.
static UInt EncodingBits(Float x)
{
UInt result;
std::memcpy(&result, &x, sizeof result);
return result;
}
// Return the encoding of a floating-point number by using math.
static UInt EncodingMath(Float x)
{
static constexpr int SignificandBits = std::numeric_limits<Float>::digits;
static constexpr int MinimumExponent = std::numeric_limits<Float>::min_exponent;
// Encode the high bit.
UInt result = std::signbit(x) ? HighBit : 0;
// If the value is zero, the remaining bits are zero, so we are done.
if (x == 0) return result;
/* The C library provides a little-known routine to split a floating-point
number into a significand and an exponent. Note that this produces a
normalized significand, not the actual significand encoding. Notably,
it brings significands of subnormals up to at least 1/2. We will
adjust for that below. Also, this routine normalizes to [1/2, 1),
whereas IEEE 754 is usually expressed with [1, 2), but that does not
bother us.
*/
int xe;
Float xf = std::frexp(std::fabs(x), &xe);
// Test whether the number is subnormal.
if (xe < MinimumExponent)
{
/* For a subnormal value, the exponent encoding is zero, so we only
have to insert the significand bits. This scales the significand
so that its low bit is scaled to the 1 position and then inserts it
into the encoding.
*/
result |= (UInt) std::ldexp(xf, xe - MinimumExponent + SignificandBits);
}
else
{
/* For a normal value, the significand is encoded without its leading
bit. So we subtract .5 to remove that bit and then scale the
significand so its low bit is scaled to the 1 position.
*/
result |= (UInt) std::ldexp(xf - .5, SignificandBits);
/* The exponent is encoded with a bias of (in C++'s terminology)
MinimumExponent - 1. So we subtract that to get the exponent
encoding and then shift it to the position of the exponent field.
Then we insert it into the encoding.
*/
result |= ((UInt) xe - MinimumExponent + 1) << (SignificandBits-1);
}
return result;
}
/* Return the encoding of a floating-point number. For illustration, we
get the encoding with two different methods and compare the results.
*/
static UInt Encoding(Float x)
{
UInt xb = EncodingBits(x);
UInt xm = EncodingMath(x);
if (xb != xm)
{
std::cerr << "Internal error encoding" << x << ".\n";
std::cerr << "\tEncodingBits says " << xb << ".\n";
std::cerr << "\tEncodingMath says " << xm << ".\n";
std::exit(EXIT_FAILURE);
}
return xb;
}
/* Return the distance from a to b as the number of values representable in
Float from one to the other. b must be greater than or equal to a. 0 is
counted only once.
*/
static UInt Distance(Float a, Float b)
{
UInt ae = Encoding(a);
UInt be = Encoding(b);
/* For represented values from +0 to infinity, the IEEE 754 binary
floating-points are in ascending order and are consecutive. So we can
simply subtract two encodings to get the number of representable values
between them (including one endpoint but not the other).
Unfortunately, the negative numbers are not adjacent and run the other
direction. To deal with this, if the number is negative, we transform
its encoding by subtracting from the encoding of -0. This gives us a
consecutive sequence of encodings from the greatest magnitude finite
negative number to the greatest finite number, in ascending order
except for wrapping at the maximum UInt value.
Note that this also maps the encoding of -0 to 0 (the encoding of +0),
so the two zeroes become one point, so they are counted only once.
*/
if (HighBit & ae) ae = HighBit - ae;
if (HighBit & be) be = HighBit - be;
// Return the distance between the two transformed encodings.
return be - ae;
}
static void Try(Float a, Float b)
{
std::cout << "[" << a << ", " << b << "] contains "
<< Distance(a,b) + 1 << " representable values.\n";
}
int main(void)
{
if (sizeof(Float) != sizeof(UInt))
{
std::cerr << "Error, UInt must be an unsigned integer the same size as Float.\n";
std::exit(EXIT_FAILURE);
}
/* Prepare some test values: smallest positive (subnormal) value, largest
subnormal value, smallest normal value.
*/
Float S1 = std::numeric_limits<Float>::denorm_min();
Float N1 = std::numeric_limits<Float>::min();
Float S2 = N1 - S1;
// Test 0 <= a <= b.
Try( 0, 0);
Try( 0, S1);
Try( 0, S2);
Try( 0, N1);
Try( 0, 1./3);
Try(S1, S1);
Try(S1, S2);
Try(S1, N1);
Try(S1, 1./3);
Try(S2, S2);
Try(S2, N1);
Try(S2, 1./3);
Try(N1, N1);
Try(N1, 1./3);
// Test a <= b <= 0.
Try(-0., -0.);
Try(-S1, -0.);
Try(-S2, -0.);
Try(-N1, -0.);
Try(-1./3, -0.);
Try(-S1, -S1);
Try(-S2, -S1);
Try(-N1, -S1);
Try(-1./3, -S1);
Try(-S2, -S2);
Try(-N1, -S2);
Try(-1./3, -S2);
Try(-N1, -N1);
Try(-1./3, -N1);
// Test a <= 0 <= b.
Try(-0., +0.);
Try(-0., S1);
Try(-0., S2);
Try(-0., N1);
Try(-0., 1./3);
Try(-S1, +0.);
Try(-S1, S1);
Try(-S1, S2);
Try(-S1, N1);
Try(-S1, 1./3);
Try(-S2, +0.);
Try(-S2, S1);
Try(-S2, S2);
Try(-S2, N1);
Try(-S2, 1./3);
Try(-N1, +0.);
Try(-N1, S1);
Try(-N1, S2);
Try(-N1, N1);
Try(-N1, 1./3);
Try(-1./3, +0.);
Try(-1./3, S1);
Try(-1./3, S2);
Try(-1./3, N1);
Try(-1./3, 1./3);
return 0;
}
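To count float values instead of double, only the two typedefs at the top should need to change, since everything else is written in terms of std::numeric_limits and the overloaded <cmath> functions (the sizeof check in main will catch a mismatch). For example, assuming float is IEEE 754 binary32 on your platform:

typedef float Float;        // The floating-point type to use.
typedef std::uint32_t UInt; // Unsigned integer of same size as Float.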