
Why do different variations of glVertexAttribPointer exist?

Tags:

opengl

There are

glVertexAttribPointer()
glVertexAttribIPointer()
glVertexAttribLPointer()

As far as I know, glVertexAttribPointer can be used instead of the other two.

If so, why do the I and L variations exist?

asked Jan 18 '15 by HolyBlackCat


3 Answers

Run a quick test and you will see the difference.

Suppose you are doing transform feedback with the following vertex shader:

#version 450 core
layout(location = 0) in int inValue;        // integer vertex attribute
layout(xfb_offset = 0) out float outValue;  // captured via transform feedback

void main()
{
    outValue = sqrt(inValue);               // implicit int -> float conversion
}

And this is your "vertex data":

GLint data[] = { 1, 2, 3, 4, 5 };
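For context, a minimal sketch of how that data might be uploaded (the vao/vbo names here are illustrative, not part of the original answer):

GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);  // upload the GLint values as-is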

Then, if you set up the vertex attribute like this:

glVertexAttribPointer(0, 1, GL_INT, GL_FALSE, 0, nullptr);

You will get strange, incorrect results: the GL converts the integer data to floating point during vertex fetch, and in practice the shader then reinterprets those float bit patterns as integers.


If you change this line in the vertex shader

outValue = sqrt(inValue);

to

outValue = sqrt(intBitsToFloat(inValue));

OR

change this line in the C++ code:

glVertexAttribPointer(0, 1, GL_FLOAT, GL_FALSE, 0, nullptr);
                            ^^^^^^^^
                            does not match the actual data type,
                            but prevents glVertexAttribPointer() from converting the values

it will work, but neither change is a natural way to do it.


This is where glVertexAttribIPointer() comes to the rescue:

--- glVertexAttribPointer(0, 1, GL_INT, GL_FALSE, 0, nullptr);
+++ glVertexAttribIPointer(0, 1, GL_INT, 0, nullptr);

Then you will get correct results.

(I struggled with this for a whole afternoon until I found glVertexAttribIPointer().)
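For completeness, here is a sketch of the capture loop around that call. It assumes the data buffer from above is bound to GL_ARRAY_BUFFER and the shader program is in use; the tfBuffer name is illustrative:

glEnableVertexAttribArray(0);
glVertexAttribIPointer(0, 1, GL_INT, 0, nullptr);    // integers stay integers

GLuint tfBuffer;
glGenBuffers(1, &tfBuffer);
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tfBuffer);
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, 5 * sizeof(GLfloat), nullptr, GL_STATIC_READ);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfBuffer);  // xfb_offset writes to buffer 0

glEnable(GL_RASTERIZER_DISCARD);                      // we only want the captured values
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, 5);
glEndTransformFeedback();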

answered Nov 16 '22 by kbridge4096


I read about this in OpenGL Insights:

When using glVertexAttribPointer(), everything gets converted to float. glVertexAttribIPointer() can only expose vertex arrays that store integers, and glVertexAttribLPointer() is only for doubles.

As confirmed by a quote on this OpenGL.org page:

For glVertexAttribPointer, if normalized is set to GL_TRUE, it indicates that values stored in an integer format are to be mapped to the range [-1,1] (for signed values) or [0,1] (for unsigned values) when they are accessed and converted to floating point. Otherwise, values will be converted to floats directly without normalization.

For glVertexAttribIPointer, only the integer types GL_BYTE, GL_UNSIGNED_BYTE, GL_SHORT, GL_UNSIGNED_SHORT, GL_INT, GL_UNSIGNED_INT are accepted. Values are always left as integer values.

glVertexAttribLPointer specifies state for a generic vertex attribute array associated with a shader attribute variable declared with 64-bit double precision components. type must be GL_DOUBLE. index, size, and stride behave as described for glVertexAttribPointer and glVertexAttribIPointer.
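To make that pairing concrete, a small sketch of which call goes with which shader declaration (attribute locations and GLSL names are illustrative, not from the quoted page):

// GLSL: layout(location = 0) in vec3 position;   -- data converted/normalized to float
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

// GLSL: layout(location = 1) in ivec2 cell;      -- values stay integers
glVertexAttribIPointer(1, 2, GL_INT, 0, nullptr);

// GLSL: layout(location = 2) in dvec3 posHigh;   -- 64-bit doubles (GL 4.1+ / ARB_vertex_attrib_64bit)
glVertexAttribLPointer(2, 3, GL_DOUBLE, 0, nullptr);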

answered Nov 16 '22 by Borgleader


No, they can't be used instead of each other.

Traditionally, all vertex attributes in the GL are floating-point. The fact that you can feed in integer data doesn't change that - the data is converted to floating point on the fly. The normalized parameter controls how that conversion is done: if it is enabled, the range of the input type is mapped to [0,1] (for unsigned types, also called UNORM in the GL) or [-1,1] (for signed types, also called SNORM); if it is disabled, the value is converted directly to the nearest floating-point value of the input integer.
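For example, with unsigned byte data the two modes of glVertexAttribPointer behave like this (the attribute index and data are illustrative):

GLubyte colors[] = { 0, 128, 255 };

// normalized = GL_TRUE: the shader sees roughly 0.0, 0.502, 1.0 (UNORM mapping)
glVertexAttribPointer(1, 3, GL_UNSIGNED_BYTE, GL_TRUE, 0, nullptr);

// normalized = GL_FALSE: the shader sees 0.0, 128.0, 255.0 (direct conversion)
glVertexAttribPointer(1, 3, GL_UNSIGNED_BYTE, GL_FALSE, 0, nullptr);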

Since this was the original API, it had to be extended when genuinely different attribute data types (integers and doubles) were introduced. Also note that the attribute pointers are independent of the shaders, so the target type cannot be determined from the currently bound shader (if any), since the same attribute setup might be used with different shaders later on. So the L variant is for double/dvec attributes, while the I variant is for int/uint/ivec/uvec attributes.

answered Nov 16 '22 by derhass