
OpenGL - How is GLenum an unsigned 32-bit integer?

To begin, there are 8 types of Buffer Objects in OpenGL:

  • GL_ARRAY_BUFFER
  • GL_ELEMENT_ARRAY_BUFFER
  • GL_COPY_READ_BUFFER
  • ...

They are enums, or more specifically GLenums, where GLenum is an unsigned 32-bit integer whose values can range up to roughly 4,294,967,295.

Most uses of buffer objects involve binding them to a certain target, e.g.:

glBindBuffer(GL_ARRAY_BUFFER, Buffers[size]);

void glBindBuffer(GLenum target, GLuint buffer); (documentation)

My question is: if it's an enum, its only values must be 0, 1, 2, 3, ... 7, so why go all the way and make it a 32-bit integer if it only has values up to 7? Pardon my knowledge of CS and OpenGL; it just seems wasteful.
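For reference, GLenum is just an integer typedef in the GL headers, and the buffer targets above are symbolic constants rather than the values 0..7. A minimal sketch, with the typedef and hex values copied from a typical GL.h/glext.h (shown here only for illustration):

#include <stdio.h>

/* As declared in a typical GL.h */
typedef unsigned int GLenum;

/* As defined in a typical glext.h (values shown for illustration) */
#define GL_ARRAY_BUFFER         0x8892
#define GL_ELEMENT_ARRAY_BUFFER 0x8893
#define GL_COPY_READ_BUFFER     0x8F36

int main(void)
{
    /* The targets are scattered symbolic constants, not a dense 0..7 range */
    printf("GL_ARRAY_BUFFER         = %u\n", (GLenum) GL_ARRAY_BUFFER);
    printf("GL_ELEMENT_ARRAY_BUFFER = %u\n", (GLenum) GL_ELEMENT_ARRAY_BUFFER);
    printf("GL_COPY_READ_BUFFER     = %u\n", (GLenum) GL_COPY_READ_BUFFER);
    printf("sizeof(GLenum)          = %u bytes\n", (unsigned) sizeof(GLenum));
    return 0;
}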

asked Dec 29 '13 by Jebathon

People also ask

What is the C type of OpenGL?

OpenGL Type   Bitdepth   Description                              Common Enum
GLushort      16         Unsigned binary integer                  GL_UNSIGNED_SHORT
GLint         32         Signed, 2's complement binary integer    GL_INT
GLuint        32         Unsigned binary integer                  GL_UNSIGNED_INT
GLfixed       32         Signed, 2's complement 16.16 integer     GL_FIXED
(17 more rows ...)
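A quick compile-time check of the bit depths listed above, assuming the typedefs from a standard <GL/gl.h> header (a sketch; the header location differs per platform):

#include <GL/gl.h>   /* provides the GLushort, GLint, GLuint typedefs (location varies by platform) */

/* C11 static assertions: the local typedefs must match the table above */
_Static_assert(sizeof(GLushort) * 8 == 16, "GLushort is expected to be 16 bits");
_Static_assert(sizeof(GLint)    * 8 == 32, "GLint is expected to be 32 bits");
_Static_assert(sizeof(GLuint)   * 8 == 32, "GLuint is expected to be 32 bits");

int main(void) { return 0; }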

Is GLenum 16-bit or 32-bit?

GLenum is a 32-bit type, but in core and modern extended GL only 16 of those bits are usable. This is why you often see enumerant re-use when an extension is promoted from EXT to ARB and then promoted to core.

What is the maximum number of bits in a GLenum?

Nevertheless, OpenGL still officially only uses the lower 16 bits of the GLenum space for allocating constant values. GLenum is a 32-bit type, but in core and modern extended GL only 16 of those bits are usable.

Can the bitdepth of an OpenGL image format be omitted?

The bitdepth can also be omitted, but only with unsigned normalized formats. Doing so gives OpenGL the freedom to pick a bitdepth. It is generally best to select one yourself, though.
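For example (a sketch, assuming a current GL context and a bound 2D texture; the 256x256 size and the pixels buffer are placeholders): an unsized internal format such as GL_RGBA leaves the bitdepth to the implementation, while a sized one such as GL_RGBA8 explicitly requests 8 bits per channel.

static unsigned char pixels[256 * 256 * 4];   /* placeholder RGBA source data */

/* Unsized internal format: the implementation picks the bitdepth */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* Sized internal format: explicitly request 8 bits per channel */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);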


1 Answer

Enums aren't used just for the buffers, but everywhere a symbolic constant is needed. Currently, several thousand enum values are assigned (look into your GL.h and the latest glext.h). Note that vendors get allocated their own official enum ranges so they can implement vendor-specific extensions without interfering with each other, so a 32-bit enum space is not a bad idea. Furthermore, on modern CPU architectures, using fewer than 32 bits won't be any more efficient, so this is not a problem performance-wise.
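For illustration, a few typical calls showing that GLenum constants appear throughout the API, not just in glBindBuffer (a sketch, assuming a current GL context; tex and count are placeholder variables):

glEnable(GL_DEPTH_TEST);                                  /* capability enum       */
glBindTexture(GL_TEXTURE_2D, tex);                        /* texture target enum   */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                GL_LINEAR);                               /* pname and value enums */
glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_INT, 0);  /* mode and type enums   */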

UPDATE: As Andon M. Coleman pointed out, currently only 16-bit enumerant ranges are being allocated. It might be useful to link to the OpenGL Enumerant Allocation Policies, which also contain the following remark:

Historically, enumerant values for some single-vendor extensions were allocated in blocks of 1000, beginning with the block [102000,102999] and progressing upward. Values in this range cannot be represented as 16-bit unsigned integers. This imposes a significant and unnecessary performance penalty on some implementations. Such blocks that have already been allocated to vendors will remain allocated unless and until the vendor voluntarily releases the entire block, but no further blocks in this range will be allocated.

Most of these seem to have been removed in favor of 16-bit values, but values above the 16-bit range have been in use. In the current glext.h, one can still find some (obsolete) enumerants above 0xffff, like

#ifndef GL_PGI_misc_hints
#define GL_PGI_misc_hints 1
#define GL_PREFER_DOUBLEBUFFER_HINT_PGI   0x1A1F8
#define GL_CONSERVE_MEMORY_HINT_PGI       0x1A1FD
#define GL_RECLAIM_MEMORY_HINT_PGI        0x1A1FE
...
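A quick arithmetic check on one of these constants (copied from the excerpt above) shows why such values cannot fit into a 16-bit enum space:

#include <stdio.h>

#define GL_PREFER_DOUBLEBUFFER_HINT_PGI 0x1A1F8   /* from the glext.h excerpt above */

int main(void)
{
    /* 0x1A1F8 == 107000, well above the 16-bit maximum of 0xFFFF == 65535 */
    printf("GL_PREFER_DOUBLEBUFFER_HINT_PGI = %u (16-bit max is %u)\n",
           (unsigned) GL_PREFER_DOUBLEBUFFER_HINT_PGI, 0xFFFFu);
    return 0;
}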
answered Sep 23 '22 by derhass