In a very high-performance app, we find that the CPU can do long arithmetic significantly faster than double arithmetic. We have also determined that our system never needs more than 9 decimal places of precision, so we use longs for all fractional arithmetic, with 9 implied decimal places understood.
However, in certain parts of the system it is more convenient, for readability, to work with doubles, so we have to convert the long value with its 9 implied decimal places into a double.
We find that simply taking the long and dividing by 10^9, or multiplying by a precomputed 1/10^9, gives imprecise representations in a double.
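For example, here is a minimal repro of the problem (the exact digits printed vary by runtime, but the failed comparison is typical). 0.1 has no exact binary representation, so the scaled long can only land on the nearest double:

using System;

class ScaleDemo
{
    static void Main()
    {
        long fx = 100_000_000L;   // 0.1 with 9 implied decimal places
        double d = fx / 1e9;      // the nearest double to 0.1, not 0.1 itself

        double sum = 0;
        for (int i = 0; i < 10; i++)
            sum += d;             // accumulating exposes the representation error

        Console.WriteLine(sum.ToString("R"));  // typically 0.9999999999999999
        Console.WriteLine(sum == 1.0);         // False
    }
}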
To work around that, we use Math.Round(value, 9) to recover the intended values.
However, Math.Round() is horrifically slow.
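A rough Stopwatch comparison illustrates the gap (a sketch only; absolute numbers vary with hardware, JIT, and inlining):

using System;
using System.Diagnostics;

class RoundCost
{
    static void Main()
    {
        const int N = 50_000_000;
        long fx = 123_456_789L;
        double sink = 0;  // accumulate results so the JIT cannot drop the loops

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
            sink += fx / 1e9;                  // plain division
        Console.WriteLine($"divide: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < N; i++)
            sink += Math.Round(fx / 1e9, 9);   // division plus rounding
        Console.WriteLine($"round:  {sw.ElapsedMilliseconds} ms  ({sink})");
    }
}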
So our current idea is to convert the mantissa and exponent directly into the binary format of a double; built that way, the value should need no rounding at all.
We have learned online how to examine the bits of a double to extract its mantissa and exponent, but we are struggling to reverse the process: taking a mantissa and an exponent and fabricating a double by assembling its bits.
Any suggestions?
[Test]
public void ChangeBitsInDouble()
{
    var original = 1.0D;

    // Reinterpret the double's 64 bits as a long: bit 63 is the sign,
    // bits 62..52 the biased exponent, bits 51..0 the stored mantissa.
    long bits = BitConverter.DoubleToInt64Bits(original);

    var negative = bits < 0;
    var exponent = (int)((bits >> 52) & 0x7ffL);
    var mantissa = bits & 0xfffffffffffffL;

    if (exponent == 0)
    {
        // Subnormal: no implicit leading bit; the effective exponent is 1.
        exponent++;
    }
    else
    {
        // Normal: restore the implicit leading 1 above the 52 stored bits.
        mantissa |= 1L << 52;
    }

    // Remove the bias (1023) plus the 52 fractional mantissa bits,
    // so that value = mantissa * 2^exponent.
    exponent -= 1075;

    if (mantissa == 0)
    {
        return; // zero: nothing to normalize or print
    }

    // Shift out trailing zero bits so the mantissa is odd.
    while ((mantissa & 1) == 0)
    {
        mantissa >>= 1;
        exponent++;
    }

    Console.WriteLine("Mantissa " + mantissa + ", exponent " + exponent
        + (negative ? " (negative)" : ""));
}
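For reference, the decomposition above can be reversed with BitConverter.Int64BitsToDouble, which reinterprets a 64-bit pattern as a double. A minimal sketch, assuming a nonzero finite value and ignoring subnormals and overflow (FromParts is an illustrative name, not a framework method):

public static double FromParts(bool negative, int exponent, long mantissa)
{
    // Renormalize so the implicit leading 1 sits exactly at bit 52,
    // i.e. the mantissa lands in [2^52, 2^53).
    while (mantissa >= 1L << 53)
    {
        mantissa >>= 1;   // too wide: truncates low bits
        exponent++;
    }
    while (mantissa < 1L << 52)
    {
        mantissa <<= 1;
        exponent--;
    }

    long bits = (mantissa & 0xfffffffffffffL)     // drop the implicit bit
              | ((long)(exponent + 1075) << 52)   // re-apply the bias
              | (negative ? 1L << 63 : 0L);       // sign bit
    return BitConverter.Int64BitsToDouble(bits);
}

With the values printed by the test above (mantissa 1, exponent 0), FromParts(false, 0, 1) reconstructs 1.0.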
You shouldn't use a scale factor of 10^9; use 2^30 instead. Scaling by a power of two only changes a double's binary exponent, so the conversion is exact and never needs Math.Round(). And since 2^-30 ≈ 9.3 × 10^-10 < 10^-9, the resolution is even finer than 9 decimal places.
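A minimal sketch of that scheme (the BinaryFixedPoint class and its helper names are illustrative, not anything standard). Because 1/2^30 is itself a power of two, the multiplication below only adjusts the double's exponent, so the long-to-double conversion is exact whenever the fixed-point value fits in the 53 bits of double precision:

using System;

static class BinaryFixedPoint
{
    private const int ScaleBits = 30;                         // resolution 2^-30 ≈ 9.3e-10
    private const double InvScale = 1.0 / (1L << ScaleBits);  // a power of two: exactly representable

    // Exact for |fx| < 2^53: multiplying by a power of two only shifts
    // the exponent, so the mantissa is never rounded.
    public static double ToDouble(long fx) => fx * InvScale;

    // The reverse is also exact in that range, so a plain cast suffices.
    public static long ToFixed(double d) => (long)(d * (1L << ScaleBits));

    static void Main()
    {
        long fx = (123L << ScaleBits) + 1;    // 123 + 2^-30
        double d = ToDouble(fx);
        Console.WriteLine(d.ToString("R"));   // prints the value exactly as stored
        Console.WriteLine(ToFixed(d) == fx);  // True: the round trip is lossless
    }
}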