Sometimes I see integer constants defined in hexadecimal instead of as decimal numbers. This is a small part I took from the GL10 class:
public static final int GL_STACK_UNDERFLOW = 0x0504;
public static final int GL_OUT_OF_MEMORY = 0x0505;
public static final int GL_EXP = 0x0800;
public static final int GL_EXP2 = 0x0801;
public static final int GL_FOG_DENSITY = 0x0B62;
public static final int GL_FOG_START = 0x0B63;
public static final int GL_FOG_END = 0x0B64;
public static final int GL_FOG_MODE = 0x0B65;
It's obviously simpler to define 2914 instead of 0x0B62, so is there maybe some performance gain? I actually don't think so, since converting the literal should be the compiler's job either way.
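For what it's worth, a quick check suggests the choice is purely cosmetic, since both notations denote the same value (HexVsDecimal is just an illustrative name):

public class HexVsDecimal {
    public static void main(String[] args) {
        // Both literals denote the same int, so the compiler should emit the
        // same constant either way and there should be no runtime difference.
        System.out.println(2914 == 0x0B62);             // true
        System.out.println(Integer.toHexString(2914));  // b62
    }
}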
The hexadecimal, or hex, numbering system is commonly used in computer and digital systems to reduce long strings of binary digits into something easier for us to read: each set of four bits becomes a single hex digit.
A hexadecimal constant is an alternative way to represent a numeric constant. In Java it takes the form 0xd[d...], where each d is a hexadecimal (base 16) digit: 0 through 9, or an uppercase or lowercase letter in the range A to F.
The biggest advantage of the hexadecimal system is the compactness of its numbers: thanks to its base of sixteen, fewer digits are required to represent a number than in binary or decimal notation. It is also relatively easy to convert binary numbers into hexadecimal numbers and vice versa.
It is likely for organizational and visual cleanliness. Base 16 has a much simpler relationship to binary than base 10, because in base 16 each digit corresponds to exactly four bits.
Notice how the constants above are grouped with many digits in common. Represented in decimal, the bits they share would be far less clear; conversely, constants that happen to share decimal digits need not have similar bit patterns at all.
Also, in many situations it is desirable to bitwise-OR constants together to create a combination of flags. If each constant is constrained to have only a subset of bits non-zero, the combination can later be separated back into its parts, and using hex constants makes it clear which bits are non-zero in each value.
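As a sketch (the flag names and values here are hypothetical, not taken from GL10), each constant occupies exactly one distinct bit, which the hex notation makes obvious:

public class Flags {
    // Hypothetical flags: one distinct bit each, easy to verify in hex.
    public static final int FLAG_BOLD      = 0x0001;
    public static final int FLAG_ITALIC    = 0x0002;
    public static final int FLAG_UNDERLINE = 0x0004;
    public static final int FLAG_STRIKE    = 0x0008;

    public static void main(String[] args) {
        int style = FLAG_BOLD | FLAG_UNDERLINE;             // combine flags
        System.out.println((style & FLAG_BOLD) != 0);       // true
        System.out.println((style & FLAG_ITALIC) != 0);     // false
        System.out.println((style & FLAG_UNDERLINE) != 0);  // true
    }
}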
There are two other reasonable possibilities: octal (base 8), which simply encodes three bits per digit, and binary-coded decimal, in which each digit requires four bits but values above 9 are prohibited; that is disadvantageous, since it cannot represent all of the patterns that binary can.
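For completeness, Java has literal syntax for all three positional bases plus decimal (binary literals require Java 7 or later; binary-coded decimal has no literal form), so the same value can be written four ways; Bases is just an illustrative class name:

public class Bases {
    public static void main(String[] args) {
        int dec = 2914;                 // decimal
        int hex = 0x0B62;               // hexadecimal (0x prefix)
        int oct = 05542;                // octal (leading zero)
        int bin = 0b0000101101100010;   // binary (0b prefix, Java 7+)
        System.out.println(dec == hex && hex == oct && oct == bin); // true
    }
}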
"It's obviously simpler to define 2914 instead of 0x0B62"
I don't know about that specific case, but quite often that is not true.
Out of the two questions:
A. Which bits are set in 2914?
B. Which bits are set in 0x0B62?
B will be answered correctly, and faster, by a lot more developers. (This goes for similar questions as well.)
0x0B62 is four hex digits long, so it represents a 16-bit number:

0x0B62 -> 0000 1011 0110 0010

(I dare you to do the same with 2914.)
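(If you would rather let the standard library take up the dare, Integer.toBinaryString prints the bit pattern directly, minus leading zeros; DareAccepted is just an illustrative name:)

public class DareAccepted {
    public static void main(String[] args) {
        // Each hex digit expands to exactly four bits:
        //    0    B    6    2
        // 0000 1011 0110 0010
        System.out.println(Integer.toBinaryString(0x0B62)); // 101101100010
        System.out.println(Integer.toBinaryString(2914));   // identical output
    }
}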
That is one reason for using the hex value; another is that the source of the value might itself use hex (the specification it comes from, for example).
Sometimes I just find it silly, as in:
public static final int NUMBER_OF_TIMES_TO_ASK_FOR_CONFIRMATION = ...;
It would almost always be silly to write that in hex, though I'm sure there are some cases where it wouldn't be.