
Can I assume that a bitwise AND operation is O(1)? If so why?

My processor can only do arithmetic operations on 8-bit or 16-bit unsigned integers.

1) This means the word size for this processor is 16 bits, correct?

Operations on words are O(1).

The reason for this has to do with the actual circuit-level implementation of how the processor works, correct?

If I were to add two words, and the result were a number with more than 16 bits, could I state the following?

1) the processor can add the numbers but would only report the 16 least significant bits.

2) to report more than 16 bits, the processor must have software that supports these operations on large numbers (numbers that don't fit in a word).

Finally,

Let's say I have a word w, which is a 16-bit number, and I want the eight least significant bits. I can do w & 0xFF. What is the time complexity of this operation? Is this O(1) because of the circuit-level implementation of the processor as well?
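For example, in C (just a sketch of what I mean, with uint16_t standing in for my processor's 16-bit word):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t w = 0xABCD;

    /* Addition wider than the word: only the 16 least significant bits survive. */
    uint16_t sum = (uint16_t)(0xFFFFu + 0x0002u);  /* 0x10001 truncated to 0x0001 */

    /* Masking to keep the eight least significant bits. */
    uint16_t low = w & 0xFF;                       /* 0x00CD */

    printf("sum=0x%04X low=0x%04X\n", sum, low);
    return 0;
}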

asked Apr 25 '16 by evianpring


4 Answers

First, about the add. Most CPUs have what is called a carry flag in some processor status register that lets you detect an addition or subtraction overflowing the bit size of the register. So find out the size of the registers on your specific CPU to determine the data bus width, then find out how you can check the status register flags. Most CPUs will have SUB and ADD instructions both with and without carry for exactly this purpose.
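C gives no direct access to the carry flag, but here is a sketch of the same idea: with unsigned types, wraparound is well defined, so a carry out of an 8-bit add shows up as the truncated sum being smaller than one of the operands (this is exactly what the hardware flag records for you):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t a = 200, b = 100;
    uint8_t sum = (uint8_t)(a + b);  /* 300 truncated to 8 bits: 44 */
    int carry = sum < a;             /* wrapped around, so hardware would set the carry flag */
    printf("sum=%d carry=%d\n", sum, carry);
    return 0;
}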

Next, about time complexity: you can't just reach for big-O notation here. You need to find out how many cycles the CPU takes to carry out the operation and convert that to absolute time (cycles divided by the clock frequency), then take into consideration other factors, such as main memory access versus L1 and L2 cache access, to figure out the total time an operation will take on that CPU.

Finally, accessing memory from assembly code (as you seem to imply) lets you be much more efficient than with higher-level languages like Python. CPUs include instructions whose memory addressing can be adjusted to fit the size of what you are looking for. C-like languages also carry this ability in the language, but Python doesn't. JavaScript doesn't even have integers, but I digress...
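For example (a minimal C sketch of this point, with made-up function names): the pointed-to type tells the compiler which access size to generate, so no extra masking is needed to read a single byte:

#include <stdint.h>

/* The pointer's type determines the access size the compiler emits. */
uint8_t  load_byte(const uint8_t *p)  { return *p; }  /* typically an 8-bit load  */
uint16_t load_word(const uint16_t *p) { return *p; }  /* typically a 16-bit load */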

If your goal is to understand low-level programming, something that will always benefit you to understand the machine better, specially around pointers and debugging, I would encourage you to take a video class on Arduino. You may even enjoy it and start a new hobby.

answered by John Difool


You apparently have some confusion about what O(...) means.

  1. Big-O notation requires a variable; for example, sorting an array of n elements using compare-and-swap is known to take at least O(n*log(n)) time, and n is the variable: as n increases, the sorting time increases even faster. When you say that x & 0xFF is O(1), what is the variable you're talking about?

  2. Big-O is about abstract algorithms where n can grow to infinity. If n and the computer are limited by an upper bound, then any algorithm either doesn't terminate or is bounded by a constant (it doesn't make sense to discuss the limit of something as n increases toward infinity if n cannot increase past a specific limit).

When talking about low-level hardware operations, all operations are O(1). Some are faster and require just one "step" (like clearing a register); some require more steps (like an integer division). Even the division, however, will take at most n steps, where n is a small fixed integer (on the order of the register's bit width).
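To illustrate, here is a sketch of textbook shift-and-subtract division (not how any particular CPU implements it): even the "slow" operation loops a fixed number of times, one step per bit of the register, regardless of the values involved:

#include <stdint.h>

/* Restoring division: exactly 16 iterations for 16-bit operands,
   no matter what the inputs are (divisor must be nonzero). */
uint16_t div16(uint16_t dividend, uint16_t divisor, uint16_t *remainder) {
    uint16_t quotient = 0, rem = 0;
    for (int i = 15; i >= 0; i--) {              /* one step per bit */
        rem = (uint16_t)((rem << 1) | ((dividend >> i) & 1u));
        if (rem >= divisor) {
            rem -= divisor;
            quotient |= (uint16_t)(1u << i);
        }
    }
    if (remainder) *remainder = rem;
    return quotient;
}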

When discussing the performance of different algorithms on a concrete CPU, it doesn't make sense to use big-O notation. What you can do is count the number of machine cycles required, with an explicit formula that may depend on the input size n, but where n cannot grow to infinity.

This is what Knuth does in TAOCP.

PS: unfortunately, CPUs today are so complex that cycle counting no longer works in the real world. They can, for example, split instructions into micro-instructions that get rescheduled to run in parallel, and they support speculative execution with rollback, branch prediction, and other hard-to-analyze techniques. On top of all that there is caching, which is of extreme importance today, and different but compatible models can take vastly different approaches. The only way to really know how long a piece of code takes on a modern CPU is to run and measure it over real data on the specific CPU model and hardware.
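In that spirit, a rough measurement sketch using standard C's clock(); the loop overhead is included, so treat the result as a ballpark upper bound, not the cost of the AND alone:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    volatile uint16_t sink = 0;           /* volatile so the AND isn't optimized away */
    const long iterations = 100000000L;

    clock_t start = clock();
    for (long i = 0; i < iterations; i++)
        sink = (uint16_t)i & 0xFF;        /* the operation under test */
    clock_t end = clock();

    printf("~%.2f ns per iteration (loop overhead included)\n",
           1e9 * (double)(end - start) / CLOCKS_PER_SEC / (double)iterations);
    return 0;
}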

answered by 6502


Short Answer:

Yes, a single bitwise AND would be considered O(1).

A Little More Detail:

Even if you look at the number of operations on each bit, it is still O(1). The actual number of bit operations may vary by variable type, e.g. 8 bits vs. 16 bits vs. 32 bits vs. 64 bits (even 128 bits or more). The key is that no matter what the underlying machine is, it will still do a constant number of operations to perform the AND. So even as computers develop over time, a bitwise AND will still be O(1).

Another Example to Help Add Clarification

The following block of code is O(1):

print('Hello World');
print('Hello World');
print('Hello World');

Although we print hello world 3 times, every time we run it, it will take a constant amount of time, and it doesn't take longer if someone feeds a large data set into the program. It'll simply print 3 things, no matter the input.

In the case of the bitwise AND, we perform a fixed number of sub-operations, e.g. 8, 16, 32, etc., for the one operation, but it's always the same number, i.e. constant.

In your example, it sounds like you are trying to show that some operations don't require all of the bits. Even if these smaller operations considered only 4 bits out of, say, 8, your code would still do a constant number of operations when it hits that line. It's like printing 4 hello world statements instead of 8 hello world statements. Either way, 4 or 8 prints, it's still constant.
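To make that concrete, here is a small C sketch (my own illustration, not from the question): a mask that keeps 4 bits and a mask that keeps 8 bits each cost the same constant amount:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t w = 0xB6;
    uint8_t low4 = w & 0x0F;  /* keep 4 bits: one AND instruction */
    uint8_t low8 = w & 0xFF;  /* keep all 8 bits: at most one AND (often optimized away) */
    printf("low4=0x%02X low8=0x%02X\n", low4, low8);
    return 0;
}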

This is why a single bitwise AND operation is O(1).

answered by James Oravec


In order:

  1. No. If your processor can do 8- or 16-bit operations, it could be either an 8-bit or a 16-bit processor. In fact, it's more likely to be 8 bits, since many processors also try to handle double-size operations.

  2. Yes, O(1). But not simply because it's in hardware; rather, because it's implemented as O(1) in hardware. Also, keep in mind that every O(x) is really "times a constant." Thus if something is O(16), that's really O(1) times a constant 16.

Finally, if you have a 16-bit word and you want the low bits, and your processor really does support 8-bit operations, you can probably access the low bits with a MOV instruction. Something like:

mov16 ax, (memory)       ; load the 16-bit word into AX
mov8  (othermemory), al  ; store AL, the low 8 bits of AX

If that isn't available, and you have to do an AND, then yes, the AND is going to be O(1), because it's almost certainly implemented that way in hardware. Even if not, it's probably O(8) at worst, and that's really an alias for O(1).

answered by aghast