I am going through the source code for the HashMap
class in Java, and found that a constant is initialized as
static final int MAXIMUM_CAPACITY = 1 << 30;
Why is it not just
static final int MAXIMUM_CAPACITY = 1073741824;
which means the same thing. Is there a performance reason, or is it just a fancy way of writing it?
If you're thinking in terms of bits, 1 << 30 makes that much more explicit than 1073741824.
There should be no performance difference whatsoever: every part of the expression is constant, so any reasonable compiler evaluates it at compile time. There's no reason not to.
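As a quick illustration (a hypothetical snippet, not taken from the JDK), both forms are constant expressions, so javac folds them to the same int value at compile time:

public class CapacityDemo {
    // Both are compile-time constant expressions; the bytecode ends up
    // holding the literal value 1073741824 either way.
    static final int SHIFTED = 1 << 30;
    static final int DECIMAL = 1073741824;

    public static void main(String[] args) {
        System.out.println(SHIFTED == DECIMAL); // prints "true"
    }
}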
It also makes it easier to spot a typo: write 1 << 3 or 1 << 20 instead of 1 << 30 and, if you know the value is supposed to be on the order of a billion, the error is obvious to anyone familiar with binary; write 10737741824 or 1073714824 instead of 1073741824 and the error is nowhere near as obvious.
Basically, it boils down to a matter of preference and, in some situations (bitmasks, for example), ease of reading.
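For instance, a hypothetical set of flag constants is much easier to audit when each flag is written as a shift, because the shift amount names the bit position directly:

public final class FileFlags {
    // Hypothetical flags; the shift amount is the bit position.
    static final int READ    = 1 << 0; // 1
    static final int WRITE   = 1 << 1; // 2
    static final int EXECUTE = 1 << 2; // 4
    static final int HIDDEN  = 1 << 3; // 8

    public static void main(String[] args) {
        int perms = READ | WRITE;
        System.out.println((perms & WRITE) != 0);   // true
        System.out.println((perms & EXECUTE) != 0); // false
    }
}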
There is no performance impact here, because the compiler does all the calculation at compile time. The only reason to write it this way is readability: it is easy to see that 1 << 30 is two to the power of thirty, while not everybody can conclude the same by looking at the decimal representation of the same number, 1073741824.
Note that a hexadecimal literal would be fine as well:
0x40000000 // same as 1 << 30
In general, whenever you are dealing with bit patterns, it is much more convenient to write them as a combination of shifts and ORs, or as hexadecimal or octal literals, because those forms are closely aligned with the binary representation: you can convert them to binary in your head one digit at a time.
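As a rough sketch (made-up mask values, purely for comparison), here is the same bit pattern written three ways; the shift and hexadecimal forms reveal the bit layout directly, while the decimal form has to be converted as a whole:

public class LiteralForms {
    // The same value written three ways; only the decimal literal hides the bits.
    static final int MASK_SHIFT = (1 << 30) | (1 << 4); // bits 30 and 4 set
    static final int MASK_HEX   = 0x40000010;           // one hex digit per 4 bits
    static final int MASK_DEC   = 1073741840;           // same value, opaque

    public static void main(String[] args) {
        System.out.println(Integer.toBinaryString(MASK_SHIFT)); // 1000000...10000
        System.out.println(MASK_SHIFT == MASK_HEX && MASK_HEX == MASK_DEC); // true
    }
}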