What is the complexity of converting a very large n-bit number to a decimal representation?
My thought is that the elementary algorithm of repeated integer division, taking the remainder to get each digit, would have O(M(n) log n) complexity, where M(n) is the complexity of the multiplication algorithm; however, the division is not between two n-bit numbers but rather one n-bit number and a small constant number, so it seems to me the complexity could be smaller.
Naive base conversion as you described takes quadratic time: you do about n bigint-by-smallint divisions, most of which take time linear in the size of the n-bit bigint.
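For concreteness, a minimal Python sketch of that repeated-division approach (base 10 assumed); each `divmod` scans the whole remaining bigint, which is where the quadratic total comes from:

    def to_decimal_naive(x: int) -> str:
        """Repeatedly divide by 10, collecting remainders (least significant first)."""
        if x == 0:
            return "0"
        digits = []
        while x > 0:
            x, r = divmod(x, 10)   # bigint-by-smallint division, Theta(n) bit ops
            digits.append(chr(ord('0') + r))
        return ''.join(reversed(digits))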
You can do base conversion in O(M(n) log n) time, however, by picking a power of the target base that is roughly the square root of the number to be converted, doing a divide-with-remainder by it (which can be done in O(M(n)) time via Newton's method), and recursing on the two halves.
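Here is a minimal Python sketch of that recursion, assuming base 10 and a small base case. Note that it simply uses Python's built-in `divmod` for the split, so it illustrates the structure rather than the asymptotics; a real implementation would use a subquadratic (Newton-reciprocal) division and precompute the powers of 10:

    import math

    def to_decimal_dc(x: int) -> str:
        """Divide-and-conquer conversion: split at a power of 10 near sqrt(x)."""
        def rec(v: int, num_digits: int) -> str:
            # Invariant: v < 10**num_digits, so padding with leading zeros is safe.
            if num_digits <= 32:
                return str(v).rjust(num_digits, '0')   # small base case
            half = num_digits // 2
            hi, lo = divmod(v, 10 ** half)             # split at 10**half
            # Low half gets exactly `half` digits, zero-padded, so the
            # concatenation reproduces the full decimal string.
            return rec(hi, num_digits - half) + rec(lo, half)

        if x == 0:
            return "0"
        # Upper bound on the number of decimal digits of x.
        est = math.ceil(x.bit_length() * math.log10(2)) + 1
        return rec(x, est).lstrip('0') or "0"

With an O(M(n)) divide-with-remainder, each level of the recursion costs O(M(n)) in total and there are O(log n) levels, giving the O(M(n) log n) bound.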