Let's say I have a function where I want the user to be able to select the appropriate texture in a type-safe manner. So instead of taking a raw GLenum of the form GL_TEXTUREx, I define a method as follows:
void activate_enable_bind(uint32_t texture_num) {
    const uint32_t max_textures = GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - GL_TEXTURE0;
    const uint32_t actual_texture = (GL_TEXTURE0 + texture_num);
    if (texture_num > max_textures) {
        throw std::runtime_error("ERROR: texture::activate_enable_bind()");
    }
    glActiveTexture(actual_texture);
    glEnable(target_type);
    glBindTexture(target_type, texture_id_);
}
Is this guaranteed to work under all implementations according to the OpenGL specification, or are implementers allowed to define
`GL_TEXTURE0` through `GL_TEXTURE(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1)`
in a non-contiguous manner?
I was also modifying my code as I went; here is what I have now:
void activate_enable_bind(uint32_t texture_num = 0) {
    GLint max_textures = 0;
    glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &max_textures);
    if (static_cast<GLint>(texture_num) > max_textures - 1) {
        throw std::runtime_error("ERROR: texture::activate_enable_bind()");
    }
    const uint32_t actual_texture = (GL_TEXTURE0 + texture_num);
    glActiveTexture(actual_texture);
    glEnable(target_type);
    glBindTexture(target_type, texture_id_);
}
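For reference, a call site would look something like this (construction details elided; texture is the class that owns target_type and texture_id_):

texture wood; // assume this sets up target_type and texture_id_ appropriately
wood.activate_enable_bind(2); // should select texture unit GL_TEXTURE2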
I think GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS is not on its own a useful value; it's something that you pass to glGet to retrieve the actual value. To account for that, you'd retrieve it like this:
GLint max_combined_texture_image_units = 0;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &max_combined_texture_image_units);
// and then maybe check for errors
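A minimal sketch of what that error check might look like, assuming you're happy to throw the way your code already does:

if (glGetError() != GL_NO_ERROR || max_combined_texture_image_units <= 0) {
    // The query failed (e.g. no current GL context), so the value can't be trusted.
    throw std::runtime_error("ERROR: failed to query GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS");
}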
As for adding to GL_TEXTURE0, that is safe; §3.8 of the OpenGL 3.2 Core specification says this:
ActiveTexture generates the error INVALID_ENUM if an invalid texture is specified. texture is a symbolic constant of the form TEXTUREi, indicating that texture unit i is to be modified. The constants obey TEXTUREi = TEXTURE0 + i (i is in the range 0 to k − 1, where k is the value of MAX_COMBINED_TEXTURE_IMAGE_UNITS).
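Given that identity, you can even sanity-check the arithmetic at compile time against the constants your GL header defines (a small sketch; headers typically define GL_TEXTURE0 through GL_TEXTURE31):

// Compile-time restatement of the TEXTUREi = TEXTURE0 + i rule for a
// couple of the constants the header provides.
static_assert(GL_TEXTURE1 == GL_TEXTURE0 + 1, "texture unit enums must be contiguous");
static_assert(GL_TEXTURE31 == GL_TEXTURE0 + 31, "texture unit enums must be contiguous");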
Your corrected code (remember that GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS is an enum you pass to glGet, not the limit itself):
void activate_enable_bind(uint32_t texture_num) {
    // GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS is only a query token; fetch the
    // actual limit first.
    GLint value = 0;
    glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &value);
    // Valid units are 0 .. value - 1; cast to avoid a signed/unsigned mismatch.
    if (static_cast<GLint>(texture_num) >= value) {
        throw std::runtime_error("ERROR: texture::activate_enable_bind()");
    }
    const GLenum actual_texture = GL_TEXTURE0 + texture_num;
    glActiveTexture(actual_texture);
    glEnable(target_type);
    glBindTexture(target_type, texture_id_);
}
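Separately, since your original goal was type safety, one option is a small strong type for the unit index so callers can't pass a raw GL_TEXTUREi enum by mistake. This is only a sketch; the texture_unit name and interface are made up:

// Hypothetical strong type wrapping a texture unit index.
class texture_unit {
public:
    explicit texture_unit(GLint index) : index_(index) {}
    GLint index() const { return index_; }
    GLenum gl_enum() const { return GL_TEXTURE0 + index_; }
private:
    GLint index_;
};

// A signature like activate_enable_bind(texture_unit unit) then rejects raw
// enums at compile time: activate_enable_bind(texture_unit{2});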
EDIT: Also read @icktoofay's answer.