I'm using an OpenGL 3.2 context with GLSL 1.5 and for some reason integer attributes (of type int, uint, ivecX, or uvecX) are always being read as 0 in the vertex shader. I'm declaring them using:
in int name;
and I am binding the attributes using glVertexAttribIPointer (note the I), not glVertexAttribPointer (though that one doesn't work either). If I change them to floats instead they work perfectly fine - the only code differences being the type in the vertex struct, the type in GLSL, and the IPointer function call instead of just Pointer. I'm not getting any errors or anything; the values are just all 0. If I hard-code integer values instead it works fine, and integer uniforms work fine. Built-in integers like gl_VertexID also work perfectly; only custom ones don't. I'm running an ATI Mobility Radeon HD 5870. I tried on another computer with a different GPU (unfortunately I'm not sure which GPU, but it was different from mine) with the same results. Any ideas why this might be the case? Thanks.
EDIT: Actually it looks like they aren't 0; more likely they're random large uninitialized values... it's hard to tell since I can't find any good way to debug GLSL shaders. Anyway, some more info. Here is my vertex structure:
struct TileVertex {
    float pos[2];         // bytes 0-7
    float uv[2];          // bytes 8-15
    float width;          // bytes 16-19
    float pad;            // bytes 20-23
    int animFrames;       // bytes 24-27
    int animFrameLength;  // bytes 28-31
};
animFrames and animFrameLength are the two integer values I'm trying to send to the shader. My call to glVertexAttribIPointer for animFrames is the following:
glVertexAttribIPointer( attribute.location, attribute.typeSize, attribute.baseType, (GLsizei)stride, bufferOffset( bufOffset + attribOffset ) );
where:
attribute.location = 1 (as determined by OpenGL)
attribute.typeSize = 1 (since it's a single int, not a vector)
attribute.baseType = 5124, which is GL_INT
stride = 32, which is sizeof( TileVertex )
bufferOffset() converts the byte offset to a void pointer relative to NULL
bufOffset = 0 (my vertices start at the beginning of the VBO), and
attribOffset = 24, which is the offset of animFrames in the TileVertex struct
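Putting those values together, the call is effectively doing this (the hard-coded location 1 and the use of offsetof are just for illustration; my real code computes the offset via bufferOffset as shown above):

glEnableVertexAttribArray( 1 );                 // attribute.location
glVertexAttribIPointer( 1,                      // location
                        1,                      // one component (a single int)
                        GL_INT,                 // type of the data in the buffer
                        sizeof( TileVertex ),   // stride = 32
                        (const GLvoid*)offsetof( TileVertex, animFrames ) );  // offset = 24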
EDIT: Thanks for the help so far guys. So I tried using transform feedback and things are making more sense now. If I set the int attrib's value to 1, in the shader it is:
1065353216 = 0x3F800000 = 1.0 in floating point
If I set it to 10, in the shader I get:
1092616192 = 0x41200000 = 10.0 in floating point
So it appears that the int attrib is being converted to float, then those bits are being interpreted as int in the shader, even though I'm specifying GL_INT and using IPointer instead of Pointer! As I understand it, IPointer is supposed to just leave the data in integer form and not convert it to a float.
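For reference, the transform feedback setup I used looks roughly like this (names like debugOut, program, feedbackBuffer and vertexCount are placeholders, not my actual code, and feedbackBuffer needs storage allocated with glBufferData beforehand):

// Before linking: tell GL which vertex shader output to capture.
const GLchar* varyings[] = { "debugOut" };
glTransformFeedbackVaryings( program, 1, varyings, GL_INTERLEAVED_ATTRIBS );
glLinkProgram( program );

// At draw time: capture the per-vertex values into the buffer.
glBindBufferBase( GL_TRANSFORM_FEEDBACK_BUFFER, 0, feedbackBuffer );
glEnable( GL_RASTERIZER_DISCARD );              // we only want the captured values
glBeginTransformFeedback( GL_POINTS );
glDrawArrays( GL_POINTS, 0, vertexCount );
glEndTransformFeedback();
glDisable( GL_RASTERIZER_DISCARD );

// Read the captured values back on the CPU to inspect them.
GLint captured[ 64 ];
glGetBufferSubData( GL_TRANSFORM_FEEDBACK_BUFFER, 0, sizeof( captured ), captured );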
EDIT:
Here are some more tests. For each test I am trying to pass the integer value 1 to an integer input in the shader:
glVertexAttribIPointer with GL_INT: shader values are 0x3F800000, which is 1.0 in floating point
Seems to indicate that integer 1 is being converted to floating point 1.0, and then those bits are being interpreted as an integer (the small check after this list confirms that 0x3F800000 is exactly the bit pattern of 1.0f). This means that OpenGL either thinks the source data is in floating point form (when it is actually in integer form), or it thinks the shader inputs are floating point (when they are actually ints).
glVertexAttribIPointer with GL_FLOAT: shader values are valid but weird floating point values, such as 0.0, 1.0, 4.0, 36.0... what the hell!?
No idea what this means. The only value I am passing is integer 1, so I can't figure out why the values would be different, or why they would be valid floats! My reasoning for trying this was that if OpenGL was converting the integers to floats, maybe telling it they were already floats would avoid that, but apparently not.
glVertexAttribPointer with GL_INT: same result as glVertexAttribIPointer with GL_INT
This is the expected result. OpenGL converts the ints to floats, then passes them to the shader. That is what is supposed to happen since I didn't use the I version.
glVertexAttribPointer with GL_FLOAT: shader values are the integer 1 (the correct result)
This works because OpenGL 1) thinks the source data is in floating point form and 2) thinks the shader inputs are also floating point (they are actually ints), so it does not apply any conversion, leaving int as int (or float as float, as far as it's concerned). This works but it seems very hacky and unreliable, since I don't think there's any guarantee that a CPU-side float won't require conversion to the GPU's float format (don't some GPUs use 16-bit floats? Maybe that's just pre-OpenGL 3, but still) - it just happens not to on my GPU.
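As a sanity check on that interpretation, here is a tiny standalone program (not part of my actual code) confirming that 0x3F800000 is just 1.0f's bit pattern read back as an integer:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main()
{
    float f = 1.0f;
    uint32_t bits;
    memcpy( &bits, &f, sizeof( bits ) );    // reinterpret the float's bits as an integer
    printf( "%u = 0x%08X\n", bits, bits );  // prints 1065353216 = 0x3F800000
    return 0;
}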
For one of the shaders (can't remember which), you need to use the varying keyword. Or maybe the attribute keyword. Later versions of GLSL use in and out instead.
I think you need:
attribute int name;
for data going to the vertex shader, and
varying int name;
for data going from the vertex shader to the fragment shader.
Also make sure to enable the shader attribute with glEnableVertexAttribArray.
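Since the question is using GLSL 1.50, the modern equivalents of those declarations would look something like this - just a sketch with made-up names, written as the string literal you would hand to glShaderSource. Note that integer values passed on to the fragment shader have to be flat-qualified:

// Vertex shader (GLSL 1.50): 'in' replaces 'attribute', 'out' replaces 'varying'.
const GLchar* vertexSrc =
    "#version 150\n"
    "in int animFrames;\n"           // per-vertex integer attribute
    "flat out int frameCount;\n"     // integer outputs must be declared 'flat'
    "void main() {\n"
    "    frameCount = animFrames;\n"
    "    gl_Position = vec4( 0.0 );\n"
    "}\n";

// The fragment shader would declare the matching input as: flat in int frameCount;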
I had some trouble getting int attributes working too, but what I did discover is that the type field (GL_INT or GL_FLOAT) passed to glVertexAttribPointer describes the data you pass to it, not the data type in the shader. So I ended up using glVertexAttribPointer with GL_INT, which converted the int on my host to a float in my shader. That was fine for me because my attribute was position data and needed to be transformed as a floating-point vec2 anyway.
You probably need glVertexAttribIPointerEXT to match your shader's int attribute, and then also GL_INT if the host is supplying the data as an int array.
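To make the distinction concrete, the two pairings would look roughly like this (location, struct and offset are borrowed from the question, and I'm writing the core glVertexAttribIPointer name rather than the EXT one):

// Keep integers as integers: buffer holds GL_INT, shader declares 'in int'.
glVertexAttribIPointer( 1, 1, GL_INT, sizeof( TileVertex ),
                        (const GLvoid*)offsetof( TileVertex, animFrames ) );

// Let OpenGL convert to float: buffer holds GL_INT, shader declares 'in float'.
glVertexAttribPointer( 1, 1, GL_INT, GL_FALSE, sizeof( TileVertex ),
                       (const GLvoid*)offsetof( TileVertex, animFrames ) );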
I know this question is old, but maybe this information is still helpful: GLint is defined as being 32 bits, but a plain int in your C code is not guaranteed to be 32 bits. This got me debugging for way too long.
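One cheap safeguard is to build the vertex struct from the GL typedefs and verify the layout at compile time - a sketch mirroring the struct from the question, using C++11 static_assert:

struct TileVertex {
    GLfloat pos[2];
    GLfloat uv[2];
    GLfloat width;
    GLfloat pad;
    GLint   animFrames;       // GLint is always 32 bits, unlike a plain int
    GLint   animFrameLength;
};

static_assert( sizeof( GLint ) == 4,       "GLint is expected to be 32 bits" );
static_assert( sizeof( TileVertex ) == 32, "the stride passed to OpenGL assumes a 32-byte vertex" );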