Find out the largest native integer type on the current platform

The problem I have is to create a sort of big integer library. I want to make it both cross-platform and as fast as possible. This means that I should do the math with the largest data types the system supports natively.

I don't actually want to know whether I am compiling for a 32-bit or 64-bit system; all I need is a way to create a 64-bit, 32-bit, or whatever-size integer based on what is the largest available. I will be using sizeof to behave differently depending on what that is.

Here are some possible solutions and their problems:

Use sizeof(void*): This gives the size of a pointer to memory. It is possible (though unlikely) that a system has wider pointers than the integer types it can do math with natively, or vice versa.

Always use long: While it is true that on several platforms long integers are either 4 bytes or 8 bytes depending on the architecture (my system is one such example), some compilers implement long as 4 bytes even on 64-bit systems.

Always use long long: On many 32-bit systems this is a 64-bit integer, which may not be as efficient (though probably more efficient than whatever code I would write myself). The real problem is that it may not be supported at all on some architectures (such as the one powering my mp3 player).

To emphasize, my code does not care what the actual size of the integer is once it has been chosen (it relies on sizeof() for anything where the size matters). I just want it to choose the type of integer that will cause my code to be most efficient.
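For illustration only (not part of the original question): a minimal C sketch of one way this could look, assuming a C99 compiler whose <stdint.h> provides uintptr_t and UINTPTR_MAX. The typedef name bn_word is hypothetical, and the selection mirrors the imperfect sizeof(void*) heuristic discussed above; everything after the typedef depends only on sizeof.

```c
/* Hypothetical sketch: choose a bignum word type from the pointer width
 * reported by <stdint.h>, then rely only on sizeof(bn_word) afterwards.
 * This mirrors the sizeof(void*) heuristic, with the caveats noted above. */
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

#if UINTPTR_MAX > 0xFFFFFFFFu
typedef uint64_t bn_word;   /* pointers wider than 32 bits: assume 64-bit words */
#else
typedef uint32_t bn_word;   /* otherwise fall back to 32-bit words */
#endif

int main(void)
{
    /* Sizes the question is weighing, plus the word type actually chosen. */
    printf("void*     : %zu bytes\n", sizeof(void *));
    printf("long      : %zu bytes\n", sizeof(long));
    printf("long long : %zu bytes\n", sizeof(long long));
    printf("bn_word   : %zu bytes (%zu bits per bignum digit)\n",
           sizeof(bn_word), sizeof(bn_word) * CHAR_BIT);
    return 0;
}
```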

Asked by Talia on Dec 28 '10
1 Answer

If you really want a native-sized type, I would use size_t, ptrdiff_t, or intptr_t and uintptr_t. On any non-pathological system, these are all going to be the native word size.
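As a rough illustration of that suggestion (the typedef name is mine, and C11's _Static_assert is assumed to be available), the native-sized choice might look like this:

```c
/* Sketch assuming C99 <stdint.h> and C11 _Static_assert: treat uintptr_t
 * as the bignum word, and sanity-check that it matches the pointer width. */
#include <stdint.h>

typedef uintptr_t bn_word;   /* hypothetical word type for the bignum */

_Static_assert(sizeof(bn_word) == sizeof(void *),
               "expected the word type to match the native pointer width");
```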

On the other hand, there are certainly benefits in terms of simplicity to always working with a fixed size, in which case I would just use int32_t or uint32_t. The reason I say it's simpler is that you often end up needing to know things like "the largest power of 10 that fits in the type" (for decimal conversion) and other constants that cannot easily be expressed as constant expressions in terms of the type you've used. If you just pick a fixed number of bits, you can also fix the convenient constants (like 1000000000 in my example).

Of course, by doing it this way you sacrifice some performance on higher-end systems. You could take the opposite approach and use a larger fixed size (64 bits), which would be optimal on higher-end systems, and assume that the compiler's code for 64-bit arithmetic on 32-bit machines will be at least as fast as your bignum code handling two 32-bit words, in which case it is still optimal.
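A sketch of that fixed-width approach, again assuming <stdint.h>; the limb names are illustrative, and 1000000000 is the power-of-10 constant mentioned above:

```c
/* Sketch of the fixed-size approach: a 32-bit limb plus the decimal
 * conversion constants that can be hard-coded once the width is fixed. */
#include <stdint.h>

typedef uint32_t limb_t;                /* bignum digit, always 32 bits       */
typedef uint64_t dlimb_t;               /* holds a full limb-by-limb product  */

#define LIMB_DEC_BASE   1000000000u     /* largest power of 10 that fits in a limb */
#define LIMB_DEC_DIGITS 9               /* decimal digits produced per limb        */
```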

Answered by R.. GitHub STOP HELPING ICE on Nov 08 '22