Recently I was asked a question in an interview - to compare bitwise operations in terms of performance.
That is, to give a brief description of how the different bit operations perform.
I guess this question is pretty general and pretty machine-specific, but I also think there are some general rules about this which you are expected to mention (and I didn't :).
So - what would you answer?
I should probably also say that it may be a good idea to compare their performance in C (or C++, whatever), because I assume these languages give the compiler more room to perform bit-related optimizations.
Thank you.
Okay, the full problem context.
The interview had several sections; some of them were really a piece of cake, some were a nightmare. The bit-related section was kind of tough and included questions like:
- Floating point number specifications: float, double
- Fast float -> int conversions (and an even faster one if you know the range); a sketch of one such trick follows this list
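For reference, here is a hedged sketch of one widely known fast conversion of that kind (the "magic number" trick). It assumes IEEE-754 doubles, a little-endian layout, the default round-to-nearest mode, and an input that fits in a 32-bit signed integer; the function name is just for illustration:

```c
#include <stdint.h>

/* Sketch of the classic "magic number" double -> int conversion.
 * Assumes: IEEE-754 double, little-endian byte order, round-to-nearest,
 * and |d| comfortably below 2^31. */
static inline int32_t fast_dtoi(double d)
{
    union { double d; int64_t i; } u;   /* type punning via a union is valid C */
    u.d = d + 6755399441055744.0;       /* 1.5 * 2^52: pushes the integer part into the low mantissa bits */
    return (int32_t)u.i;                /* the low 32 bits now hold the rounded result */
}
```

Note that this rounds to the nearest integer rather than truncating the way a cast does, which is part of why it can be cheaper.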
Those weren't very tough, but as the last question in this bit-related section I was asked to enumerate bit operations I know and compare their performance.
I answered with something not very descriptive, like "it's architecture-, compiler-, ... specific, it won't actually matter, bitwise is already pretty low-level", but I guess that answer sucks.
In the C programming language, operations can be performed at the bit level using bitwise operators. Bitwise operators are usually contrasted with their logical counterparts (the logical AND, OR, NOT operators), which treat whole values as true or false rather than working on individual bits.
What is a bitwise operator? A bitwise operator in C is a type of operator that works on bit arrays and bit strings, manipulating binary values one bit at a time. Programmers often use bitwise operators for electronics and IoT-related work, where operating directly at the bit level can be faster.
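For concreteness, here is a minimal sketch showing the six bitwise operators C provides (the operand values are arbitrary examples):

```c
#include <stdio.h>

int main(void)
{
    unsigned int a = 0xF0u, b = 0x3Cu;

    printf("a & b  = 0x%X\n", a & b);   /* AND:  0x30 */
    printf("a | b  = 0x%X\n", a | b);   /* OR:   0xFC */
    printf("a ^ b  = 0x%X\n", a ^ b);   /* XOR:  0xCC */
    printf("~a     = 0x%X\n", ~a);      /* NOT (one's complement) */
    printf("a << 2 = 0x%X\n", a << 2);  /* left shift:  0x3C0 */
    printf("a >> 2 = 0x%X\n", a >> 2);  /* right shift: 0x3C */
    return 0;
}
```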
I would imagine they mean comparing bit operations to arithmetic equivalents.
For example, "is a = (a>>1)
faster than a = (a / 2)
?"
In many cases, simple operations like this will (for various reasons including software and hardware optimisations, pipelining, caching, etc) effectively take 1 CPU cycle, so on modern processors you are unlikely to see a difference - but if they run many instructions in parallel through multiple ALUs then you may still benefit from better utilisation of the CPU's parallel pathways if you mix arithmetic and bitwise operations. If you write in a higher level language then the compiler is very likely to optimise your code to use whatever is the better form, so you'll see the same performance regardless of which way you write your code!
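As an illustration of that last point, here is a minimal sketch; the function names are mine, and the remarks about the generated code describe what an optimising compiler typically does, not a guarantee:

```c
#include <stdint.h>

/* For an unsigned value the two forms are exactly equivalent, and an
 * optimising compiler will typically emit a single shift for both. */
uint32_t half_div(uint32_t a)   { return a / 2;  }
uint32_t half_shift(uint32_t a) { return a >> 1; }

/* For signed values they are NOT equivalent: division truncates toward
 * zero, while an arithmetic right shift (on typical two's-complement
 * targets) rounds toward negative infinity, so a / 2 gives -1 for a == -3
 * but a >> 1 gives -2.  The compiler still avoids a real divide here,
 * but it needs a couple of extra instructions to fix up the rounding. */
int32_t half_div_signed(int32_t a) { return a / 2; }
```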
However, there are cases where bitwise operations can be significantly faster than simple arithmetic/logic - for example, you can apply some operations to all the bytes in a 32-bit or 64-bit value with a few bitwise operations (that effectively process all the bytes simultaneously), which would otherwise take a lot of looping and comparing logic to do incrementally. See here for some great examples of what can be achieved. In these cases there is often a significant performance benefit to be gained. (although the exact gain can depend a lot on the CPU you're targeting)
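To illustrate the idea, here is a sketch of one well-known trick of this kind - checking whether a 32-bit word contains a zero byte - written both as a byte-by-byte loop and as a handful of bitwise operations; the function names are mine:

```c
#include <stdint.h>

/* Straightforward version: inspect each byte in turn. */
int has_zero_byte_loop(uint32_t v)
{
    for (int i = 0; i < 4; ++i)
        if (((v >> (8 * i)) & 0xFFu) == 0)
            return 1;
    return 0;
}

/* Bitwise version: tests all four bytes at once.  The subtraction's
 * borrows mark zero bytes, ~v removes bytes whose high bit was already
 * set, and the final mask keeps only the marker bits; the result is
 * nonzero exactly when some byte of v is zero. */
int has_zero_byte_bitwise(uint32_t v)
{
    return ((v - 0x01010101u) & ~v & 0x80808080u) != 0;
}
```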
(Alternatively, they may have just meant "is a shift operation faster than an XOR", in which case with modern processors in most cases the answer is probably "no" - most bitwise operations are quick. But that would be a pretty pointless question to ask...)
This is a very strange question to be asked. First of all, the language should be irrelevant: as long as you're writing in a compiled language, any bitwise operation should compile down to a single assembly instruction, even on the most simplistic instruction sets.
Secondly, once that single assembly instruction hits the processor, it should be evaluated in a single cycle - bitwise stuff is just that simple. It's possible that a space-starved processor would implement the shift operation in a couple cycles (they require a lot more circuitry than the others to do quickly), but that seems pretty unlikely on any modern machine, so even shifts should be single-cycle instructions.
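To make that concrete, here is a minimal sketch; the x86-64 mnemonics in the comments are what a typical optimising compiler emits for these functions, though the exact output of course depends on the compiler and target:

```c
#include <stdint.h>

/* Each of these compiles to (essentially) one ALU instruction on x86-64. */
uint32_t bit_and(uint32_t a, uint32_t b) { return a & b;  }  /* and */
uint32_t bit_or (uint32_t a, uint32_t b) { return a | b;  }  /* or  */
uint32_t bit_xor(uint32_t a, uint32_t b) { return a ^ b;  }  /* xor */
uint32_t bit_not(uint32_t a)             { return ~a;     }  /* not */
uint32_t bit_shl(uint32_t a, uint32_t n) { return a << n; }  /* shl */
uint32_t bit_shr(uint32_t a, uint32_t n) { return a >> n; }  /* shr */
```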
Here's what Wikipedia has to say on the matter:
A bitwise operation operates on one or more bit patterns or binary numerals at the level of their individual bits. On most older microprocessors, bitwise operations are slightly faster than addition and subtraction operations and usually significantly faster than multiplication and division operations. On modern architectures, this is not the case: bitwise operations are generally the same speed as addition (though still faster than multiplication).
Hope this helps!