Given an (unsigned) integer, what is the generally fastest way to convert it into a string that contains its decimal representation?
The naïve way of doing that is to repeatedly divide by 10 until you reach zero. I dislike this approach, because it costs a division and a modulo operation for every digit.
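For reference, a minimal sketch of that divide-by-ten loop (the function name and buffer handling here are only for illustration):

#include <stdint.h>

/* Naive conversion: peel off the least significant digit with / and %,
 * then reverse, because the digits come out backwards. */
void uint64_to_string_div(char *out, uint64_t in) {
    char tmp[20]; /* 2^64 - 1 has at most 20 decimal digits */
    int n = 0;
    do {
        tmp[n++] = (char)('0' + in % 10); /* one division and one modulo per digit */
        in /= 10;
    } while (in != 0);
    while (n > 0)
        *out++ = tmp[--n];
    *out = '\0';
}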
I thought of the following method to convert integers to decimal instead. Is this a good idea? How is this done in common implementations of functions like printf?
#include <stdint.h>
static const uint64_t i64_tab[20] = {
    1u,
    10u,
    100u,
    1000u,
    10000u,
    100000u,                /* 10^5  */
    1000000u,
    10000000u,
    100000000u,
    1000000000u,
    10000000000u,           /* 10^10 */
    100000000000u,
    1000000000000u,
    10000000000000u,
    100000000000000u,
    1000000000000000u,      /* 10^15 */
    10000000000000000u,
    100000000000000000u,
    1000000000000000000u,
    10000000000000000000u   /* 10^19 */
};
void uint64_to_string(char *out, uint64_t in) {
    int i;
    uint64_t tenpow;
    char accum;
    /* Find the largest power of ten that fits into the value. */
    for (i = 19; i > 0; i--) {
        if (in >= i64_tab[i]) break;
    }
    /* For each power of ten, count how often it fits by repeated
     * subtraction; that count is the digit. */
    do {
        tenpow = i64_tab[i];
        accum = '0';
        while (in >= tenpow) {
            in -= tenpow;
            accum++;
        }
        *out++ = accum;
    } while (i-- > 0);
    *out = '\0';
}
static const uint32_t i32_tab[10] = {
    1u,
    10u,
    100u,
    1000u,
    10000u,
    100000u,     /* 10^5 */
    1000000u,
    10000000u,
    100000000u,
    1000000000u  /* 10^9 */
};
void uint32_to_string(char *out, uint32_t in) {
    int i;
    uint32_t tenpow;
    char accum;
    /* Same algorithm as above, for 32-bit values. */
    for (i = 9; i > 0; i--) {
        if (in >= i32_tab[i]) break;
    }
    do {
        tenpow = i32_tab[i];
        accum = '0';
        while (in >= tenpow) {
            in -= tenpow;
            accum++;
        }
        *out++ = accum;
    } while (i-- > 0);
    *out = '\0';
}
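A quick way to exercise both functions (the buffer sizes of 21 and 11 bytes are my choice: the maximum digit count plus the terminating NUL):

#include <stdio.h>

int main(void) {
    char buf64[21], buf32[11];
    uint64_to_string(buf64, 18446744073709551615u); /* UINT64_MAX */
    uint32_to_string(buf32, 4294967295u);           /* UINT32_MAX */
    printf("%s\n%s\n", buf64, buf32);
    return 0;
}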
The fastest approach on all but the simplest (e.g. 8-bit) microcontrollers is to use division, but reduce the number of divisions by generating several digits at once.
You will find highly optimized code in the answers to my question here. Using it from C should be a trivial edit to eliminate std::string; there are no C++ features used in the actual conversion. The core is:
while (val >= 100)
{
    int pos = val % 100;
    val /= 100;
    *(short*)(c - 1) = *(short*)(digit_pairs + 2 * pos); // or use memcpy
    c -= 2;
}
while (val > 0)
{
    *c-- = '0' + (val % 10);
    val /= 10;
}
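Put together as a self-contained C function, that two-digits-per-division idea might look roughly like this; the digit_pairs table contents, the buffer handling, and the use of memcpy in place of the type-punned short store are my additions for illustration:

#include <stdint.h>
#include <string.h>

/* 100 two-character entries: "00", "01", ..., "99". */
static const char digit_pairs[201] =
    "00010203040506070809"
    "10111213141516171819"
    "20212223242526272829"
    "30313233343536373839"
    "40414243444546474849"
    "50515253545556575859"
    "60616263646566676869"
    "70717273747576777879"
    "80818283848586878889"
    "90919293949596979899";

void uint32_to_string_pairs(char *out, uint32_t val) {
    char buf[10];               /* a uint32_t has at most 10 decimal digits */
    char *c = buf + sizeof buf; /* fill the buffer from the end */
    size_t len;

    while (val >= 100) {        /* one division peels off two digits */
        unsigned pos = val % 100;
        val /= 100;
        c -= 2;
        memcpy(c, digit_pairs + 2 * pos, 2);
    }
    do {                        /* remaining one or two digits */
        *--c = (char)('0' + val % 10);
        val /= 10;
    } while (val > 0);

    len = (size_t)(buf + sizeof buf - c);
    memcpy(out, c, len);
    out[len] = '\0';
}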
I also provided optimized division-free code for 8-bit micros, similar in spirit to the code in the question, but without loops. It ends up with a lot of code like this:
if (val >= 80) {
    ch |= '8';
    val -= 80;
}
else if (val >= 40) {
    ch |= '4';
    val -= 40;
}
if (val >= 20) {
    ch |= '2';
    val -= 20;
}
if (val >= 10) {
    ch |= '1';
    val -= 10;
}
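For illustration, a hypothetical self-contained version of that pattern for one pair of digits (a value below 100) could look like the following; only the subtract-and-OR structure comes from the answer, the function name and the fixed two-digit output are mine:

#include <stdint.h>

/* Convert a value in 0..99 to exactly two ASCII digits without division.
 * '0' is 0x30, so OR-ing in 8, 4, 2 and 1 builds the tens digit in place;
 * whatever remains after the subtractions is the ones digit. */
void u8_to_two_digits(char out[2], uint8_t val) {
    char ch = '0';
    if (val >= 80)      { ch |= 8; val -= 80; }
    else if (val >= 40) { ch |= 4; val -= 40; }
    if (val >= 20)      { ch |= 2; val -= 20; }
    if (val >= 10)      { ch |= 1; val -= 10; }
    out[0] = ch;                 /* tens digit, '0'..'9' */
    out[1] = (char)('0' + val);  /* ones digit */
}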