
Precise control over texture bits in GLSL

I am trying to implement an octree traversal scheme using OpenGL and GLSL, and would like to keep the data in textures. While there is a big selection of formats to use for the texture data (floats and integers of different sizes), I have some trouble figuring out if there is a way to get more precise control over the bits and thus achieve greater efficiency and more compact storage. This might be a general problem, not one that applies only to OpenGL and GLSL.

As a simple toy example, let's say that I have a texel containing a 16-bit integer. I want to encode two 1-bit booleans, one 10-bit integer value and then a 4-bit integer value into this texel. Is there a technique to encode these when creating the texture, and then decode the components when sampling the texture in a GLSL shader?
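(For illustration, a 16-bit texel with that layout could be packed on the CPU along these lines. The bit positions chosen here, flags in the two top bits, the 10-bit value in bits 4-13 and the 4-bit value in the low nibble, are just an assumed convention, and pack_texel is a made-up helper name.)

#include <stdint.h>

/* Assumed layout: bit 15 = flag1, bit 14 = flag2, bits 4-13 = value10, bits 0-3 = value4 */
uint16_t pack_texel(int flag1, int flag2, uint16_t value10, uint16_t value4)
{
    return (uint16_t)((flag1 ? 0x8000 : 0) |
                      (flag2 ? 0x4000 : 0) |
                      ((value10 & 0x3FF) << 4) |
                      (value4 & 0xF));
}

Each packed value would then be one element of the data array handed to glTexImage2D.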

Edit: Looks like I am in fact looking for bit manipulation techniques. Since they seem to be supported, I should be fine after some more researching.

asked Feb 19 '13 by Victor Sand


1 Answer

Integer and bit manipulation inside GLSL shaders has been supported since OpenGL 3 (thus present on DX10-class hardware, if that tells you more). So you can just do this bit manipulation yourself inside the shader.

But working with integers is one thing, getting them out of the texture is another. The standard OpenGL texture formats (that you may be used to) either store floats directly (like GL_R16F) or normalized fixed-point values (like GL_R16, effectively integers for the uninitiated ;)), but reading from them (using texture, texelFetch or whatever) will net you float values in the shader, from which you cannot easily or reliably recover the original bit pattern of the internally stored integer.
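To illustrate the problem: with a normalized format and an ordinary sampler2D, the best you could attempt is to undo the normalization by hand, which is exactly the fragile detour to avoid (a sketch only; normTex is a made-up name for a GL_R16 texture):

uniform sampler2D normTex;   // GL_R16, normalized fixed point

...
float v = texture(normTex, ...).r;        // a float in [0,1]
uint bits = uint(round(v * 65535.0));     // hope this round-trips to the stored 16-bit pattern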

So what you really need to use is an integer texture, which requires OpenGL 3, too (or maybe the GL_EXT_texture_integer extension, but hardware supporting that will likely have GL3 anyway). So for your texture you need to use an actual integer internal format, like e.g. GL_R16UI (for a 1-component 16-bit unsigned integer), in contrast to the usual fixed-point formats (like e.g. GL_R16 for a normalized [0,1] color with 16 bits of precision).

And then in the shader you need to use an integer sampler type, like e.g. usampler2D for an unsigned integer 2D texture (and likewise isampler... for the signed variants) to actually get an unsigned integer from your texture or texelFetch calls:

CPU:

glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, ..., GL_RED_INTEGER, GL_UNSIGNED_SHORT, data);

GPU:

uniform usampler2D tex;

...
uint value = texture(tex, ...).r;
bool b1 = (value & 0x8000u) == 0x8000u,   // bit 15: first flag
     b2 = (value & 0x4000u) == 0x4000u;   // bit 14: second flag
uint i1 = (value >> 4) & 0x3FFu,          // bits 4-13: the 10-bit value
     i2 = value & 0xFu;                   // bits 0-3: the 4-bit value
answered Oct 16 '22 by Christian Rau