I was studying pointers in depth, since I don't think I have a good grasp of them, and came across the following line on Wikipedia:
When dealing with arrays, the critical lookup operation typically involves a stage called address calculation which involves constructing a pointer to the desired data element in the array. If the data elements in the array have lengths that are divisible by powers of two, this arithmetic is usually much more efficient.
Why is this so?
The above line appears under the heading "Uses".
Multiplying by 2^n is done by shifting left. Modern processors can do a shift in a single cycle (and on x86, small shifts — scale factors of 1, 2, 4, or 8 — are built into the address calculation itself). A regular multiply operation takes 4-10 clock cycles on AMD64 machines, and most likely similar on modern Intel processors. There are also restrictions on how close together two consecutive multiply operations can be issued.
Of course, if the array is quite large, it may be more efficient to use a multiply instruction and pack the data more tightly (rather than padding each element out to a power-of-two size), because of cache efficiency.
Of course, modern compilers are clever, so if you need to multiply X by 12, the compiler will generate (X << 3) + (X << 2), for example, which is faster than a single multiply operation.
The address calculation of the i'th element involves base + size_of_element * i.
When the element size is a power of 2, say size_of_element = 2^m, this can be computed as base + (i << m).
The shift is much cheaper than the multiplication required in the general case.