No matter which C-compatible library I use, when I look at its header-defined constants, they are always defined as hexadecimal values. For instance, in GL/gl.h:
#define GL_POINTS          0x0000
#define GL_LINES           0x0001
#define GL_LINE_LOOP       0x0002
#define GL_LINE_STRIP      0x0003
#define GL_TRIANGLES       0x0004
#define GL_TRIANGLE_STRIP  0x0005
#define GL_TRIANGLE_FAN    0x0006
#define GL_QUADS           0x0007
#define GL_QUAD_STRIP      0x0008
#define GL_POLYGON         0x0009
Is there any particular reason for this convention? Why not simply use decimal values instead?
There are a number of possible reasons:
1) Bit flags are much easier to express as hex, since each hex digit represents exactly 4 bits.
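For example, a hypothetical set of flags (purely illustrative, not taken from any real header) shows how each constant maps to a single bit, and how combining and testing them reads naturally in hex:

#include <stdio.h>

/* Hypothetical flags: each hex digit covers exactly 4 bits,
   so the set bit can be read straight off the constant. */
#define FLAG_READ    0x0001  /* bit 0 */
#define FLAG_WRITE   0x0002  /* bit 1 */
#define FLAG_EXEC    0x0004  /* bit 2 */
#define FLAG_APPEND  0x0008  /* bit 3 */
#define FLAG_CREATE  0x0010  /* bit 4 */

int main(void)
{
    unsigned int mode = FLAG_READ | FLAG_WRITE;   /* 0x0003 */
    if (mode & FLAG_WRITE)
        printf("writable, mode = 0x%04X\n", mode);
    return 0;
}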
2) Even for values which aren't explicitly bit flags, there are often intentional bit patterns that are more evident when written as hex.
For instance, the AlphaFunction values all share the high byte 0x02 and differ only in the low byte:
#define GL_NEVER     0x0200
#define GL_LESS      0x0201
#define GL_EQUAL     0x0202
#define GL_LEQUAL    0x0203
#define GL_GREATER   0x0204
#define GL_NOTEQUAL  0x0205
#define GL_GEQUAL    0x0206
#define GL_ALWAYS    0x0207
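As a sketch of why that grouping is handy when reading a header or a debugger dump, the snippet below splits such values into their high and low bytes. It redeclares a few of the constants so it compiles without GL/gl.h, and the masking helper is purely illustrative, not part of the OpenGL API:

#include <stdio.h>

/* Local copies of a few values from above, so this compiles standalone. */
#define GL_NEVER   0x0200
#define GL_LEQUAL  0x0203
#define GL_ALWAYS  0x0207

int main(void)
{
    unsigned int funcs[] = { GL_NEVER, GL_LEQUAL, GL_ALWAYS };
    for (size_t i = 0; i < sizeof funcs / sizeof funcs[0]; ++i) {
        /* High byte identifies the group, low byte the member. */
        printf("0x%04X -> group 0x%02X, member %u\n",
               funcs[i], (funcs[i] >> 8) & 0xFFu, funcs[i] & 0xFFu);
    }
    return 0;
}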
3) Hex values are allowed to have leading zeroes, so it is easier to line up the values, which makes reading (and proof-reading) easier. You might be surprised that leading zeroes are allowed in hex and octal literals but not in decimal ones, but the C++ standard is quite explicit about it:
A decimal integer literal (base ten) begins with a digit other than 0 and consists of a sequence of decimal digits.
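This also explains why you cannot pad decimal constants with zeroes to line them up: a leading 0 turns the literal into octal. A small standalone demonstration:

#include <stdio.h>

int main(void)
{
    int hex = 0x0009;  /* leading zeroes are fine in hex: still 9        */
    int oct = 0011;    /* a leading 0 makes this octal: 9, not eleven    */
    printf("hex = %d, oct = %d\n", hex, oct);  /* prints: hex = 9, oct = 9 */
    return 0;
}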