int length = (int) floor(log10((float) number)) + 1;
My question is essentially a math question: WHY does taking the log10() of a number, flooring that value, adding 1, and then casting it to an int correctly calculate the number of digits in number?
I really want to know the deep mathematical explanation please!
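For concreteness, here is the line dropped into a minimal complete program (the variable name number and the test value 31415 are arbitrary; compile with -lm):

#include <math.h>
#include <stdio.h>

int main(void)
{
    int number = 31415;  /* arbitrary test value: 5 digits */
    int length = (int) floor(log10((double) number)) + 1;
    printf("%d has %d digits\n", number, length);  /* prints: 31415 has 5 digits */
    return 0;
}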
The formula is the integer part of log10(number), plus 1. For example, if the number is 1245, it is at least 1000 and below 10000, so its log is in the range 3 < log10(1245) < 4. Taking the integer part gives 3; adding 1 to it gives the number of digits, 4.
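A quick way to see those intermediate values for the 1245 example (purely an illustrative check, not part of the formula itself):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double lg = log10(1245.0);                    /* about 3.0952, so 3 < lg < 4 */
    printf("log10(1245)  = %f\n", lg);
    printf("integer part = %d\n", (int) lg);      /* 3 */
    printf("digit count  = %d\n", (int) lg + 1);  /* 4 */
    return 0;
}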
If it bothers you that log10 by itself is off by one (clearly 100 has 3 digits, not 2), you can get the actual number of digits by rounding the logarithm away from zero to the next integer.
For an integer number that has n digits, its value is between 10^(n - 1) (included) and 10^n (excluded), and so log10(number) is between n - 1 (included) and n (excluded). The floor function then cuts off the fractional part, leaving n - 1. Finally, adding 1 to it gives the number of digits, n.
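As a sanity check of that argument, here is a small brute-force comparison against a digit count done by repeated division (the upper limit of 1000000 is arbitrary; this is only a sketch, compiled with -lm):

#include <math.h>
#include <stdio.h>

/* Count digits the "obvious" way: divide by 10 until nothing is left. */
static int digits_by_division(int x)
{
    int count = 0;
    do {
        count++;
        x /= 10;
    } while (x > 0);
    return count;
}

int main(void)
{
    for (int x = 1; x <= 1000000; x++) {
        int by_log = (int) floor(log10((double) x)) + 1;
        if (by_log != digits_by_division(x)) {
            printf("mismatch at %d\n", x);  /* would indicate floating-point trouble */
            return 1;
        }
    }
    printf("floor(log10(x)) + 1 matched for every x up to 1000000\n");
    return 0;
}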
Consider that a four-digit number x is somewhere in the range 1000 <= x < 10000. Taking the log base 10 of all three components gives 3.000 <= log(x, 10) < 4.000. Taking the floor (or int) of each component and adding one pins the middle term down to exactly 4, i.e. int(log(x, 10)) + 1 = 4.
Ignoring round-off error, this gives you the number of digits in x.
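On the round-off point: the cast in the original line matters. A small demonstration, assuming typical IEEE-754 float/double types and a reasonably accurate log10 (the value 999999999 is just a convenient test case):

#include <math.h>
#include <stdio.h>

int main(void)
{
    int number = 999999999;  /* 9 digits, but too precise for a 24-bit float mantissa */

    /* (float) rounds 999999999 up to exactly 1.0e9, so log10 lands on 9.0 */
    int with_float  = (int) floor(log10((float) number)) + 1;
    /* (double) represents 999999999 exactly, so log10 is just under 9 */
    int with_double = (int) floor(log10((double) number)) + 1;

    printf("float cast : %d digits\n", with_float);   /* typically 10 (wrong) */
    printf("double cast: %d digits\n", with_double);  /* 9 (correct) */
    return 0;
}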