
Why does the implementation of std::to_string create a buffer 4 times the size of the type?

The maximal number of binary digits (bits) needed to represent an N-digit decimal value is ceil(N * log(10) / log(2)) = ceil(N * 3.32). In other words, a single decimal digit needs at most ceil(3.32) bits, that is, 4.

Conversely, for a type of Size bytes (8 bits each), the number of decimal digits needed is at most:

Decimals = ceil(8 * Size / 3.32) = ceil(2.41 * Size).

Rounding 2.41 up and adding room for the sign (plus allocation overhead), you get:

Decimals = 4 * Size.

Note: a conversion of a single signed char with snprintf needs up to 5 bytes (e.g. "-128", including the sign and the terminating zero), so the one-byte case falls one byte short. For every type larger than one byte, Decimals = 4 * Size is big enough.