 

How does a 32 bit processor support 64 bit integers?

Tags: c++, integer

In C++, you can use an int, which is usually 4 bytes, and a long long, which is usually 8 bytes. If the CPU is 32-bit, wouldn't that limit it to 32-bit numbers? How come I can use a long long if the CPU doesn't support 64 bits? Can the ALU add larger integers or something?

asked Apr 13 '14 by user3452725

2 Answers

Most processors include a carry flag and an overflow flag to support operations on multi-word integers. The carry flag is used for unsigned math, and the overflow flag for signed math.

For example, on an x86 you could add two unsigned 64-bit numbers (which we'll assume are in EDX:EAX and EBX:ECX) something like this:

add eax, ecx  ; this does an add, ignoring the carry flag
adc edx, ebx  ; this adds the carry flag along with the numbers
; sum in edx:eax

It's possible to implement this sort of thing in higher-level languages like C++ as well, but they do a lot less to support it, so the code typically ends up substantially slower than the equivalent written in assembly language.
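For illustration, here's a minimal C++ sketch of the same idea, done by hand with 32-bit halves (the U64Parts type and add64 function are made-up names for this example, not anything the compiler or standard library provides):

#include <cstdint>

// A 64-bit value represented as two 32-bit halves (high and low).
struct U64Parts {
    std::uint32_t hi;
    std::uint32_t lo;
};

// Add two such values using only 32-bit arithmetic: add the low halves,
// detect the wraparound to recover the carry, then fold that carry into
// the sum of the high halves -- the same job the ADD/ADC pair does above.
U64Parts add64(U64Parts a, U64Parts b) {
    U64Parts r;
    r.lo = a.lo + b.lo;                   // like "add eax, ecx"
    std::uint32_t carry = (r.lo < a.lo);  // wrapped around => carry of 1
    r.hi = a.hi + b.hi + carry;           // like "adc edx, ebx"
    return r;
}

In practice a compiler targeting 32-bit x86 emits essentially the add/adc pair shown earlier whenever you add two long long values, so you rarely have to write this yourself.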

Most operations are basically serial in nature. When you're doing addition at the binary level, you take two input bits plus a carry-in and produce one result bit and one carry-out. That carry is then used as an input when adding the next more significant bit, and so on across the word (known as a "ripple-carry adder", because the carry "ripples" across the word).
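To make the ripple concrete, here's a rough C++ simulation of a ripple-carry adder that adds one bit position at a time (just a model of the logic, not how you'd ever actually add numbers in software):

#include <cstdint>

// Simulate a ripple-carry adder: add two 32-bit words one bit at a time,
// feeding each position's carry-out into the next position's carry-in.
std::uint32_t ripple_add(std::uint32_t a, std::uint32_t b) {
    std::uint32_t sum = 0;
    std::uint32_t carry = 0;
    for (int i = 0; i < 32; ++i) {
        std::uint32_t abit = (a >> i) & 1u;
        std::uint32_t bbit = (b >> i) & 1u;
        sum |= (abit ^ bbit ^ carry) << i;                        // result bit
        carry = (abit & bbit) | (abit & carry) | (bbit & carry);  // carry out
    }
    return sum;
}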

There are more sophisticated adder designs (carry-lookahead, carry-select, and so on) that can reduce that dependency between one bit and the next when a particular addition doesn't actually produce a carry, and most current hardware uses such things.

In the worst case, however, adding 1 to a number that's already the largest a given word size supports will result in generating a carry from every bit to the next, all the way across the word.

That means that (to at least some extent) the word width a CPU supports imposes a limit on the maximum clock speed at which it can run. If somebody wanted to badly enough, they could build a CPU that worked with, say, 1024-bit operands. If they did that, however, they'd have two choices: either run it at a lower clock speed, or else take multiple clocks to add a single pair of operands.

Also note that as you widen operands like that, you need more storage (e.g., larger cache) to store as many operands, more gates to carry out each individual operation, and so on.

So given identical technology, you could have a 64-bit processor that ran at 4 GHz and had, say, 4 megabytes of cache, or a 1024-bit processor that ran at about 250 MHz and had, perhaps, 2 megabytes of cache.

The latter would probably be a win if most of your work was on 1024-bit (or larger) operands. Most people don't do math on 1024-bit operands very often, though. In fact, 64-bit numbers are large enough for most purposes. As such, supporting wider operands would probably turn out to be a net loss for most people most of the time.

answered Oct 14 '22 by Jerry Coffin


You might also consider that we used to cope with 16- or even 32-bit integers back in the days of 8-bit CPUs. There's nothing that prevents any particular ALU from handling arbitrarily large numbers other than memory space and, ultimately, I suppose, the patience of the user.

Smalltalk, for example, has provided arbitrary-length integers ever since the original Dorados and Altos, which takes us back to the 1970s. Want the exact value of 963!? Just ask for it. It'll take a while to format it for printing, though.
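The same thing can be done by hand in C++ too; here's a rough sketch of an arbitrary-precision factorial that stores the number as a vector of base-10^9 limbs (all the names here are made up for the example, not part of any library):

#include <cstdint>
#include <iomanip>
#include <iostream>
#include <vector>

// Compute n! as a little-endian vector of base-10^9 "limbs": multiply the
// whole number by each k in turn and propagate the carry limb by limb --
// a ripple carry done in software, just with much bigger digits.
std::vector<std::uint32_t> factorial(unsigned n) {
    const std::uint64_t BASE = 1000000000;    // 10^9 fits comfortably in 32 bits
    std::vector<std::uint32_t> limbs{1};
    for (unsigned k = 2; k <= n; ++k) {
        std::uint64_t carry = 0;
        for (auto& limb : limbs) {
            std::uint64_t cur = std::uint64_t(limb) * k + carry;
            limb = static_cast<std::uint32_t>(cur % BASE);
            carry = cur / BASE;
        }
        while (carry) {                       // append new high-order limbs
            limbs.push_back(static_cast<std::uint32_t>(carry % BASE));
            carry /= BASE;
        }
    }
    return limbs;
}

int main() {
    auto f = factorial(30);                   // 30! already overflows 64 bits
    std::cout << f.back();                    // most significant limb, no padding
    for (auto it = f.rbegin() + 1; it != f.rend(); ++it)
        std::cout << std::setw(9) << std::setfill('0') << *it;
    std::cout << '\n';
}

Each limb plays the role of one very large "digit", and the carry is propagated from limb to limb in software, much as the CPU's adder propagates it from bit to bit.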

answered Oct 14 '22 by timrowledge