I just calculated the same number in two ways, but in NumPy they come out slightly different:
[[ 0.910221324013388510820732335560023784637451171875]] [[-0.9102213240133882887761274105287156999111175537109375]]
These numbers agree up to about 1e-15, but differ after that. How do I treat this error?
Is there any way to limit floating-point accuracy?
Since I feed these numbers into an exponential, even tiny differences result in frustrating errors...
Do you care about the actual precision of the result, or about getting the exact same digits back from your two calculations?
If you just want the same digits, you could use np.around() to round the results to some appropriate number of decimal places. However, by doing this you'll only reduce the precision of the result.
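For example, with two made-up arrays standing in for the two results in the question (the exact values here are illustrative, not the asker's actual computation):

```python
import numpy as np

# Two values that agree only to ~15 decimal places.
a = np.array([[0.9102213240133885]])
b = np.array([[0.9102213240133882]])

# Rounding both to 12 decimal places makes their representations identical,
# at the cost of discarding the (unreliable) trailing digits.
print(np.around(a, 12) == np.around(b, 12))  # [[ True]]
```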
If you actually want to compute the result more precisely, you could try using the np.longdouble type for your input array, which, depending on your architecture and compiler, might give you an 80- or 128-bit floating point representation, rather than the standard 64-bit np.double.*
You can compare the approximate number of decimal places of precision using np.finfo:

```python
import numpy as np

print(np.finfo(np.double).precision)      # 15
print(np.finfo(np.longdouble).precision)  # 18
```
Note that not all numpy functions will support long double - some will down-cast it to double.
*However, some compilers (such as Microsoft Visual C++) will always treat long double as synonymous with double, in which case there would be no difference in precision between np.longdouble and np.double.
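A minimal sketch of that approach, assuming a single input value x standing in for the asker's array (the second print is omitted an expected value because the result is platform-dependent):

```python
import numpy as np

x = np.array([[0.9102213240133885]])

# Cast to extended precision *before* the sensitive computation.
# On most x86 Linux/macOS builds this gives 80-bit extended precision;
# under MSVC, np.longdouble is simply an alias for np.double.
x_ld = x.astype(np.longdouble)

# The exponential is then evaluated entirely in extended precision.
result = np.exp(x_ld)
print(result.dtype)  # np.longdouble (may equal float64 where longdouble == double)
```

Note that this only helps if every step of the calculation stays in np.longdouble; mixing in a np.double operand will silently drop back to 64-bit precision.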