
Turning bits into values

How does a computer know that, for int x, y, the expression x << y means "shift x left by y bits"? I don't mean the shift part; I mean the y part. Does the computer shift x by one and subtract one from y until y == 0? If not, how does the computer figure out the value of y?

If, say, y = 10, then its binary representation is 0b1010. The computer can't simply take each bit of 1010 and use it, can it?

I am trying to do this for bit sizes larger than 8. Since the values are not single standard integers but arrays of them, the container doesn't represent a value on its own, so overloading the << and >> operators is a bit harder. However, counting down from a 100-bit number to 0 is hopelessly inefficient, so I'm trying to find a faster way to make the computer understand a bit array; something like the sketch below is the sort of approach I'm after.
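For reference, here's a rough sketch of the kind of thing I mean (the word size, array layout, and names are just placeholders I made up): decompose y into a whole-word part and a sub-word part, so the work depends on the array length rather than on y:

    #include <stdint.h>

    #define NWORDS 4  /* 4 x 32 bits = a 128-bit value, least significant word first */

    /* Left-shift a multi-word bit array by y bits in a single pass,
       by splitting y into a whole-word offset and a within-word offset
       instead of shifting one bit at a time y times. */
    void shl_bigint(uint32_t a[NWORDS], unsigned y)
    {
        unsigned words = y / 32;  /* how many whole words the shift spans */
        unsigned bits  = y % 32;  /* the remaining sub-word shift */
        for (int i = NWORDS - 1; i >= 0; i--) {
            uint32_t lo = (i >= (int)words)     ? a[i - words]     : 0;
            uint32_t hi = (i >= (int)words + 1) ? a[i - words - 1] : 0;
            /* guard bits == 0: shifting right by 32 would be undefined */
            a[i] = bits ? (lo << bits) | (hi >> (32 - bits)) : lo;
        }
    }

A right shift would work the same way with the directions reversed.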

asked Nov 18 '25 by calccrypto


1 Answer

First off, in C the effect of performing a shift by an amount greater than or equal to the bit width of the type in question is undefined; in other words, if you have 32-bit ints, x << 33 will produce an unreliable result (it need not be zero!).
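(As an aside, if your shift count can legitimately reach or exceed the width, one common way to get a defined result is to guard the count yourself; a minimal sketch, with a made-up helper name:)

    #include <stdint.h>

    /* One portable way to get a well-defined result for any shift count:
       clamp it yourself, yielding 0 once the count reaches the bit width. */
    static uint32_t safe_shl(uint32_t x, unsigned y)
    {
        return (y < 32) ? (x << y) : 0;
    }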

The exact implementation depends on your hardware. Some embedded processors do indeed perform a loop of single-bit shifts; on more powerful CPU architectures such as x86, however, there is a machine instruction that performs an arbitrary shift in a single operation, typically implemented with something like a barrel shifter in hardware. The C restriction on the value of the shift operand exists because different instruction sets handle out-of-range shift counts differently; x86 truncates the shift count (i.e., reduces it modulo 32 for a 32-bit operand), but other instruction set architectures may behave differently.
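Incidentally, a barrel shifter does more or less what you guessed a computer can't do: it takes each bit of y and uses it, with one fixed-distance stage per bit of the count. A rough C emulation of the idea (real hardware evaluates the stages with cascaded multiplexers, not a loop):

    #include <stdint.h>

    /* Emulates a 32-bit logarithmic barrel shifter: stage i applies a
       fixed shift of 2^i only when bit i of the count is set, so five
       stages cover every count from 0 to 31 using the bits of y directly. */
    static uint32_t barrel_shl(uint32_t x, unsigned y)
    {
        for (unsigned i = 0; i < 5; i++)
            if (y & (1u << i))
                x <<= 1u << i;
        return x;
    }

For your example of y = 10 = 0b1010, the stages for 2 and 8 are taken, for a total shift of 10.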

In general, unless you're developing for an embedded processor, you don't need to worry about a shift by an arbitrary count being expensive.

answered Nov 20 '25 by bdonlan