I was calculating projections of normalized 2D points and noticed, by accident, that they were more accurate than when I projected the points without normalizing them. My code is in C++, and I compile with the NDK for an Android phone that lacks an FPU (floating-point unit).
Why do I gain accuracy in calculations with C++ when I first normalize the values so they are between 0 and 1?
Is it generally true in C++ that you gain accuracy in arithmetic if you work with variables that are between 0 and 1 or is it related to the case of compiling for an ARM device?
Standardization: standardizing features means centering them around 0 with a standard deviation of 1. This matters when we compare measurements that have different units; variables measured on different scales do not contribute equally to the analysis and can end up introducing a bias.
When we normalize a variable we first shift the scale so that it starts at 0, and then compress it so that it ends at 1. We do so by first subtracting the minimum value, and then dividing by the new maximum value (which is the old maximum minus the old minimum).
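For concreteness, here is a minimal C++ sketch of both transforms. The function and variable names are illustrative and not taken from the question's code.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <numeric>
#include <vector>

// Standardize in place: subtract the mean and divide by the standard
// deviation, so the result has mean 0 and standard deviation 1.
void standardize(std::vector<float>& v) {
    const float mean = std::accumulate(v.begin(), v.end(), 0.0f) / v.size();
    float sq = 0.0f;
    for (float x : v) sq += (x - mean) * (x - mean);
    const float sd = std::sqrt(sq / v.size());
    if (sd == 0.0f) return;                  // all values equal; nothing to scale
    for (float& x : v) x = (x - mean) / sd;
}

// Min-max normalize in place: subtract the minimum, then divide by the new
// maximum (old max minus old min), so the result lies in [0, 1].
void normalize(std::vector<float>& v) {
    const auto [minIt, maxIt] = std::minmax_element(v.begin(), v.end());
    const float lo = *minIt;
    const float range = *maxIt - lo;
    if (range == 0.0f) return;               // degenerate case: all values equal
    for (float& x : v) x = (x - lo) / range;
}

int main() {
    std::vector<float> xs = {120.0f, 340.0f, 275.0f, 90.0f};  // e.g. pixel coordinates
    normalize(xs);
    for (float x : xs) std::printf("%g ", x);                  // values now in [0, 1]
    std::printf("\n");
    return 0;
}
```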
You have a misunderstanding of precision. Precision is basically the number of bits available to you for representing the mantissa of your number.
You may find that you seem to have more digits after the decimal point if you keep the scale between 0 and 1, but that is not precision; precision does not change at all with the scale or sign of the value.
For example, single precision stores 23 mantissa bits (24 significant bits counting the implicit leading 1) whether your number is 0.5 or 1e38. Double precision stores 52 mantissa bits (53 significant bits).
See this answer for more details on IEEE754 bit-level representation.
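To see that the precision of the type is fixed regardless of magnitude, here is a small sketch (not part of the original answer) that prints the significand width of each type and the relative gap between adjacent floats at two very different scales:

```cpp
#include <cmath>
#include <cstdio>
#include <limits>

int main() {
    // The number of significand bits is a property of the type, not of the value.
    std::printf("float significand bits:  %d\n", std::numeric_limits<float>::digits);   // 24 (23 stored + implicit 1)
    std::printf("double significand bits: %d\n", std::numeric_limits<double>::digits);  // 53 (52 stored + implicit 1)

    // The gap to the next representable float grows with the magnitude,
    // so the *relative* precision is about the same at 0.5 and at 1e38.
    for (float x : {0.5f, 1.0e38f}) {
        const float next = std::nextafter(x, std::numeric_limits<float>::infinity());
        std::printf("x = %g, gap to next float = %g, relative gap = %g\n",
                    x, next - x, (next - x) / x);
    }
    return 0;
}
```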