In the docs I found an enum case defined as:
kCGBitmapByteOrderDefault = (0 << 12)
As far as I know, this means bit shift zero 12 times... which is still zero. What am I missing?
The << operator shifts the left-hand value to the left by the number of bits given on the right-hand side. So your example really is zero: 0 shifted 12 bits to the left is still 0. Shifting matters when the left-hand value is non-zero: 1 << 1 is 2, 1 << 2 is 4, and 1 << 12 is 4096.
More generally, x << n is equivalent to x * 2^n, so 0 << 12 is 0 * 2^12, which is just 0, exactly what you observed.
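For instance, a minimal Swift sketch (the constants named in the comments are the Core Graphics values discussed below) shows what a few left shifts evaluate to:

let a: UInt32 = 1 << 1   // 2
let b: UInt32 = 1 << 2   // 4
let c: UInt32 = 1 << 12  // 4096, i.e. 0x1000 (kCGBitmapByteOrder16Little)
let d: UInt32 = 0 << 12  // still 0 (kCGBitmapByteOrderDefault)
print(a, b, c, d)        // prints "2 4 4096 0"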
If you look at all of the relevant values, you see:
kCGBitmapByteOrderMask = kCGImageByteOrderMask,
kCGBitmapByteOrderDefault = (0 << 12),
kCGBitmapByteOrder16Little = kCGImageByteOrder16Little,
kCGBitmapByteOrder32Little = kCGImageByteOrder32Little,
kCGBitmapByteOrder16Big = kCGImageByteOrder16Big,
kCGBitmapByteOrder32Big = kCGImageByteOrder32Big
And kCGBitmapByteOrderMask is 0x7000 (i.e. the three bits after you shift over 12 bits: 0b0111000000000000).
So 0 << 12 is just a very explicit way of saying "the bits, after you shift over 12 bits, are 0". Yes, 0 << 12 is actually 0, but it makes explicit that kCGBitmapByteOrderDefault is not when the whole CGBitmapInfo value is zero (because there could be other meaningful, non-zero data in those first 12 bits), but only when the bits after the first 12 are zero.

So, in short, the << 12 is not technically necessary, but it makes the intent more explicit.
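To make that concrete, here is a small Swift sketch (the particular alpha-info value is chosen purely for illustration) of how the mask isolates the byte-order bits, so "Default" means those three bits are zero even when the low 12 bits are not:

import CoreGraphics

// Build a CGBitmapInfo whose low 12 bits (the alpha info) are non-zero
// but whose byte-order bits (bits 12-14) are zero.
let info = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)

// Mask off everything except bits 12-14 (0x7000).
let byteOrderBits = info.rawValue & CGBitmapInfo.byteOrderMask.rawValue

if byteOrderBits == 0 {
    // This is kCGBitmapByteOrderDefault (0 << 12), even though
    // info.rawValue itself is non-zero.
    print("default byte order")
}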
Per the Apple docs for CGBitmapInfo:
The byte order constants specify the byte ordering of pixel formats.
...If the code is not written correctly, it’s possible to misread the data which leads to colors or alpha that appear wrong.
The various constants for kCGBitmapByteOrder mostly map to similarly named constants in CGImageByteOrder, which does not have a "Default." Those values are found in detail in the docs for CGImageByteOrderInfo.
The one you asked about is the default, which, as you noted, shifts 0 and is still 0; but, as Rob notes, the preceding and following bits still matter.
What you were missing is the other options:
kCGBitmapByteOrder16Little = (1 << 12)
16-bit, little endian format.
kCGBitmapByteOrder32Little = (2 << 12)
32-bit, little endian format.
kCGBitmapByteOrder16Big = (3 << 12)
16-bit, big endian format.
kCGBitmapByteOrder32Big = (4 << 12)
32-bit, big endian format.
These use different values depending on whether the image is 16-bit or 32-bit, and on whether the least- or most-significant byte comes first.
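For reference, a quick Swift sketch of the raw values those shifts produce (each constant lives entirely within the 0x7000 mask, i.e. bits 12-14):

let constants: [(String, UInt32)] = [
    ("kCGBitmapByteOrderDefault",  0 << 12),  // 0x0000
    ("kCGBitmapByteOrder16Little", 1 << 12),  // 0x1000
    ("kCGBitmapByteOrder32Little", 2 << 12),  // 0x2000
    ("kCGBitmapByteOrder16Big",    3 << 12),  // 0x3000
    ("kCGBitmapByteOrder32Big",    4 << 12),  // 0x4000
]
for (name, value) in constants {
    print(name, "0x" + String(value, radix: 16))
}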
The "Default" (0 << 12)
follows the same format/process of shifting by 12. And, as Rob pointed out, the the first 12 bits and any following also have meaning. Using these other options has a different effect in how they're interpreted vs using the "Default"
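As a usage sketch (the size, color space, and alpha choice here are arbitrary, not taken from the question), the byte-order constant is typically OR-ed together with an alpha-info constant when creating a bitmap context, which is where a wrong choice shows up as swapped colors:

import CoreGraphics

// Combine byte order (bits 12-14) with alpha info (low bits).
let bitmapInfo = CGBitmapInfo.byteOrder32Little.rawValue |
                 CGImageAlphaInfo.premultipliedFirst.rawValue

// Little-endian 32-bit with premultiplied-first alpha is the common
// BGRA layout; the initializer returns nil if the combination is unsupported.
let context = CGContext(
    data: nil,
    width: 100,
    height: 100,
    bitsPerComponent: 8,
    bytesPerRow: 0,
    space: CGColorSpaceCreateDeviceRGB(),
    bitmapInfo: bitmapInfo
)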