Are shift operations O(1) or O(n)?
Does it make sense that computers generally require more operations to shift 31 places instead of shifting 1 place?
Or does it make sense that the number of operations required for shifting is constant, regardless of how many places we need to shift?
PS: wondering if hardware is an appropriate tag..
Bitshifting shifts the binary representation of a number to the left or to the right by a given number of positions. Shifting a binary number by one bit is equivalent to multiplying (when shifting to the left) or dividing (when shifting to the right) the number by 2.
1 in binary is 0001, so bitshifting it by 0 won't do anything, which aligns with what you observed. Any number x << 0 is equivalent to x * 2^0, which is x * 1, which is just x.
Basically, bitwise operators can be applied to the integer types: long, int, short, char, and byte.
To calculate a left shift by 3 bits, follow these steps:
1. Get your number in binary format, e.g., 0000 0101.
2. Shift your bit string 3 positions to the left, discarding the digits falling out of scope and filling up from the right with 0s: 0010 1000.
And that's it; you have performed a shift of 3 bits to the left.
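As a quick sanity check, here is that same example as a tiny C sketch (hypothetical snippet, nothing architecture-specific):

```c
#include <stdio.h>

int main(void) {
    unsigned char x = 0x05;    /* 0000 0101 */
    unsigned char y = x << 3;  /* 0010 1000 = 0x28 = 40, i.e. 5 * 2^3 */
    printf("%d\n", y);         /* prints 40 */
    return 0;
}
```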
Some instruction sets are limited to one bit of shift per instruction. Others allow you to specify any number of bits to shift in one instruction, which usually takes one clock cycle on modern processors (modern being an intentionally vague word). See dan04's answer about a barrel shifter, a circuit that shifts more than one bit in one operation.
It all boils down to the logic algorithm. Each bit in the result is a logic function based on the input. For a single right shift, the algorithm would be something like:
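As a rough illustration only (a C sketch of the per-bit equation, not actual gate logic; the 8-bit width and function name are my own assumptions):

```c
/* Each output bit of a single-bit logical right shift is just the
 * neighboring input bit; the top bit is filled with 0. */
unsigned char shr1(unsigned char in) {
    unsigned char out = 0;
    for (int i = 0; i < 7; i++)
        out |= (unsigned char)(((in >> (i + 1)) & 1) << i);  /* out[i] = in[i+1] */
    /* out[7] stays 0 for a logical shift */
    return out;
}
```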
But the logic equation could just as easily be:
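For example (again just a C sketch under the same assumptions), each output bit could be a mux over the input bits, selected by the shift amount:

```c
/* Every output bit selects one of the input bits (or 0) based on the
 * shift amount n; essentially what a barrel shifter wires up in gates. */
unsigned char shr_n(unsigned char in, unsigned n) {
    unsigned char out = 0;
    for (int i = 0; i < 8; i++) {
        unsigned src = (unsigned)i + n;                     /* which input bit feeds out[i] */
        unsigned bit = (src < 8) ? ((in >> src) & 1) : 0;   /* fill with 0 past the end */
        out |= (unsigned char)(bit << i);
    }
    return out;
}
```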
The logic gates, being asynchronous, can do all of this in one clock cycle. Yet it is true that the single-bit shift allows for a faster clock cycle and fewer gates to settle, if all you are comparing is these two flavors of an instruction. The alternative is letting it take longer to settle, so the instruction takes 2 or 3 clocks or whatever, and the logic counts to 3 and then latches the result.
The MSP430, for example, only has single-bit rotate-right instructions (because you can perform a single-bit shift or a rotate left with another instruction, which I will leave to the reader to figure out).
The ARM instruction set allows both immediate and register-based multi-bit rotates, arithmetic shifts, and logical shifts. I think there is only one actual rotate instruction and the other is an alias, because a rotate left by n is the same as a rotate right by 32 - n; you only need a one-direction barrel shifter to implement a multi-bit rotate.
SHL on x86 allows more than one bit per instruction, but it used to take more than one clock.
And so on; you can easily examine any of the instruction sets out there.
The answer to your question is that it is not fixed. Sometimes it is one operation, one cycle, one instruction. Sometimes it is one instruction multiple clock cycles. Sometimes it is multiple instructions, multiple clock cycles.
Compilers often optimize for these sorts of things. Say you have an instruction set with 16-bit registers, a swap-byte instruction, and an AND-with-immediate instruction, but only a single-bit shift. You might think shifting by 8 bits would require 8 shift instructions, but you could just swap the bytes (one instruction) and then AND the lower half to zeros (which might take two instructions, might be a variable-word-length instruction of two words, or might encode into a single instruction), so it takes only 2 or 3 instructions/clock cycles instead of 8. For a shift of 9 bits you can do the same thing and add one more shift, making it 3 or 4 clocks instead of 9. Also, on some architectures it is faster to multiply by 256 than to shift by 8, and so on. Each instruction set has its own limitations and tricks.
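A rough C equivalent of that byte-swap trick (hypothetical sketch; a real compiler would emit the swap and mask instructions directly):

```c
#include <stdint.h>

/* (x << 8) on a 16-bit value, built from a byte swap plus an AND,
 * instead of eight single-bit shifts. */
uint16_t shl8(uint16_t x) {
    uint16_t swapped = (uint16_t)((x << 8) | (x >> 8));  /* stands in for a SWPB-style instruction */
    return swapped & 0xFF00;                             /* AND the low byte to zeros */
}
```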
It is not even the case that most instruction sets provide multi-bit shifts or that most limit you to single-bit shifts. The processors that fall into the "computer" category, like x86, ARM, PowerPC, and MIPS, lean toward one operation per shift. Expand to all processors commonly used today, not just "computers", and it shifts the other way: I would say more of them are single-bit than multi-bit, so multiple operations are needed to perform a multi-bit shift.
A barrel shifter allows the shift to be performed in O(log n) passes, which may all be done in the same clock cycle, making shifting an O(1) operation.
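A software sketch of the idea (hypothetical; in hardware these are log2(32) = 5 cascaded mux stages rather than a loop):

```c
#include <stdint.h>

/* 32-bit logical left shift done in 5 stages; each bit of n enables
 * one stage that shifts by a fixed power of two (1, 2, 4, 8, 16). */
uint32_t barrel_shl(uint32_t x, unsigned n) {
    n &= 31;
    for (unsigned stage = 0; stage < 5; stage++)
        if (n & (1u << stage))
            x <<= (1u << stage);
    return x;
}
```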
As already noted, a barrel shifter can shift an operand an arbitrary distance in constant time. A barrel shifter, however, consumes a fair amount of space on a CPU die, so it is not included in all CPU designs.
Just for one fairly well-known example, the Intel Pentium III included a barrel shifter, but the Pentium IV did not. Code written for a Pentium III assuming a barrel shifter was present sometimes slowed down quite a bit on a Pentium IV. I had some encryption code (which includes lots of shifting and rotating) that ran about 4 times faster on a 1.2 GHz Pentium III than it did on a 2.8 GHz Pentium IV.
Bit shifting is O(1) on practically every current processor.
Take a look, for example, at the x86 "shrw" instruction. The first operand (in AT&T syntax) is the number of bits to shift. How a compiler implements shifting depends on the compiler, but it would be silly to put shifts in a loop when a processor can shift N bits in one go.
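To make that concrete, a hypothetical C comparison (both functions compute the same result for n smaller than the register width):

```c
/* O(n) version: one single-bit shift per iteration. */
unsigned shift_loop(unsigned x, unsigned n) {
    while (n--)
        x >>= 1;
    return x;
}

/* What you actually want: the processor shifts n bits in one instruction. */
unsigned shift_once(unsigned x, unsigned n) {
    return x >> n;  /* assumes n < the width of unsigned */
}
```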
Addendum: Re: "Do they require more operations to shift left 31?" There are different kinds of shifts (if you're wondering why, consider what to do with the bits that are shifted off the register), but most processors can do a single-instruction shift of as many bits as a general-purpose register can hold. Doing a 40-bit shift of a 64-bit number held in two 32-bit registers requires shifting across multiple registers, which on every processor I know of takes more instructions. It would still be O(1), just probably not 1 clock. As an interesting side note, the Pentium IV is amazingly slow at bit shifts, which is ironic because Intel has historically recommended optimizing power-of-two divides and multiplies by way of bit shifting. See: this PDF and Google for more info if interested.
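For illustration, here is a sketch of how a shift across two 32-bit halves might be synthesized (hypothetical helper; register allocation and the exact instruction sequence vary by architecture):

```c
#include <stdint.h>

/* Logical right shift of a 64-bit value held in two 32-bit halves,
 * assuming n < 64. Bits fall from the high half into the low half. */
void shr64(uint32_t *hi, uint32_t *lo, unsigned n) {
    if (n == 0)
        return;
    if (n < 32) {
        *lo = (*lo >> n) | (*hi << (32 - n));
        *hi >>= n;
    } else {
        *lo = *hi >> (n - 32);
        *hi = 0;
    }
}
```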
For normal hardware with fixed-size registers, it's constant regardless of how many places you shift.
Also note that the usage of O notation is quite odd here; you would normally use it to denote the algorithmic complexity based on the number being shifted, not the number of places to shift.