Passing uint attribute to GLSL

Tags: opengl, glsl

I'm trying to pass a bunch of consecutive unsigned ints as an attribute to my GLSL shader.

So far I have come up with:

s_number = glGetAttribLocation(shader, "number");

numberData = new GLuint[dotAmount];
for (GLuint i = 0; i < dotAmount; i++) {
    numberData[i] = i;
}

glGenBuffers(1, &vertBuf);
glBindBuffer(GL_ARRAY_BUFFER, vertBuf);

glBufferData(
        GL_ARRAY_BUFFER,
        sizeof(dotAmount),
        numberData,
        GL_STATIC_DRAW
);

The rendering function is

glUseProgram(shader);

[..]

glEnableVertexAttribArray(s_number);
glBindBuffer(GL_ARRAY_BUFFER, vertBuf);

glVertexAttribPointer(
        s_number,
        1,
        GL_UNSIGNED_INT,
        GL_FALSE,
        0,
        BUFFER_OFFSET(0)
);

glDrawArrays(GL_POINTS, 0, dotAmount);

I try to use the number in the vertex shader like this:

attribute uint number;

(The name 'vertBuf' is actually a bit misleading, since it's not vertex data I want to pass.) I'm using OpenGL 3 and GLSL version 1.30.

What I am trying to achieve is for the shaders to be executed dotAmount times; the positioning is done mathematically within the shader. But all I get is a blank screen...

I am quite sure that the problem does not lie in the shaders. I want to draw points, and if I put gl_Position = vec4(0.0, 0.0, 0.0, 0.0); in the vertex shader, I assume it should draw something.

asked Sep 20 '13 by arsenbonbon

1 Answer

You are using the wrong API call to specify your vertex attribute pointer.

glVertexAttribPointer (...) is for floating-point vertex attributes. It will happily take the value of an integer data type, but ultimately that value will be converted to floating-point. This is why it has a parameter to control floating-point normalization. When normalization is enabled, an integer value you pass is scaled by the type's range so that it fits within the normalized floating-point range: [-1.0, 1.0] (signed) or [0.0, 1.0] (unsigned); when disabled, an integer is effectively treated as if it were cast to a GLfloat. For example, with GL_UNSIGNED_INT a raw value of 7 arrives as 7 / 4294967295 ≈ 1.6e-9 when normalized, and as 7.0 when normalization is off.

In your case, you want neither behavior described above. In your vertex shader, your vertex attribute is not a floating-point type to begin with, so having OpenGL convert your vertex array data to floating-point will produce meaningless results.

What you need to do is use glVertexAttribIPointer (...). Notice how this function lacks the boolean for normalization? It passes your integer vertex data to your vertex shader completely unaltered, which is exactly what you want.
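
Here is a minimal sketch of what the corrected setup and draw code could look like, reusing the names from the question (shader, dotAmount, vertBuf, s_number, numberData) and assuming GLEW plus a current GL context; it also sizes the buffer from the element count, since sizeof(dotAmount) only measures the counter variable itself:

#include <GL/glew.h>
#include <vector>

GLuint vertBuf  = 0;
GLint  s_number = -1;

void setupNumberAttribute(GLuint shader, GLsizei dotAmount)
{
    s_number = glGetAttribLocation(shader, "number");

    // Consecutive indices 0, 1, 2, ..., dotAmount - 1
    std::vector<GLuint> numberData(dotAmount);
    for (GLuint i = 0; i < static_cast<GLuint>(dotAmount); ++i)
        numberData[i] = i;

    glGenBuffers(1, &vertBuf);
    glBindBuffer(GL_ARRAY_BUFFER, vertBuf);

    // Buffer size is the whole array in bytes
    glBufferData(GL_ARRAY_BUFFER,
                 dotAmount * sizeof(GLuint),
                 numberData.data(),
                 GL_STATIC_DRAW);
}

void drawNumbers(GLuint shader, GLsizei dotAmount)
{
    glUseProgram(shader);

    glBindBuffer(GL_ARRAY_BUFFER, vertBuf);
    glEnableVertexAttribArray(static_cast<GLuint>(s_number));

    // Integer variant: no normalization flag, values reach the shader unconverted
    glVertexAttribIPointer(static_cast<GLuint>(s_number),
                           1, GL_UNSIGNED_INT, 0, (const void*)0);

    glDrawArrays(GL_POINTS, 0, dotAmount);
}

setupNumberAttribute would run once after the program is linked, and drawNumbers once per frame.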


In summary:

  1. glVertexAttribPointer (...) is good for supplying data to floating-point vertex attributes (e.g. vec<N>, mat<N>, float) and will do the data-type conversion for you.

  2. glVertexAttribIPointer (...) is specifically designed for integer vertex attributes (e.g. ivec<N>, uvec<N>, int, uint) and passes the data through unconverted (a matching shader-side sketch follows).
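
To make the shader side explicit, here is a sketch of a matching GLSL 1.30 vertex shader, embedded as a C++ raw string. The declaration of number is the part that matters for the question; the positioning formula is purely illustrative, not the asker's actual math:

const char* numberVertexShaderSrc = R"GLSL(
#version 130

in uint number;   // fed by glVertexAttribIPointer, arrives unconverted

void main()
{
    // Illustration only: spread the points along the x axis by index
    float x = float(number) * 0.01 - 1.0;

    // w must be non-zero, otherwise the perspective divide is degenerate
    gl_Position = vec4(x, 0.0, 0.0, 1.0);
}
)GLSL";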

answered Sep 28 '22 by Andon M. Coleman