
GLSL Instancing - Max number of inputs for vertex data?

Tags:

opengl

glsl

I am trying to implement instancing in my OpenGL program. I got it to work, and then decided to make my GLSL code more efficient by sending the Model-View-Projection matrix to the GLSL program as a per-instance input, so that the CPU computes it once per instance, as opposed to the GPU recomputing it for every vertex. Here is my vertex shader code (most of it is irrelevant to my question):

#version 330 core

// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 2) in vec3 vertexColor;
layout(location = 3) in vec3 vertexNormal_modelspace;
layout(location = 6) in mat4 models;
layout(location = 10) in mat4 modelsV;
layout(location = 14) in mat4 modelsVP;

// Output data ; will be interpolated for each fragment.
out vec3 newColor;

out vec3 Position_worldspace;

out vec3 Normal_cameraspace;
out vec3 EyeDirection_cameraspace;

// Values that stay constant for the whole mesh.
uniform mat4 MVP;
uniform mat4 MV;
uniform mat4 P;
uniform mat4 V;
uniform mat4 M;
uniform int num_lights;
uniform vec3 Lights[256];

void main(){

// Output position of the vertex, in clip space : P * modelsV * position
gl_Position =  P * modelsV * vec4(vertexPosition_modelspace,1);

// Position of the vertex, in worldspace : M * position
Position_worldspace = (models * vec4(vertexPosition_modelspace,1)).xyz;

// Vector that goes from the vertex to the camera, in camera space.
// In camera space, the camera is at the origin (0,0,0).
vec3 vertexPosition_cameraspace = ( modelsV * vec4(vertexPosition_modelspace,1)).xyz;
EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace;    

// Normal of the vertex, in camera space
Normal_cameraspace = ( modelsV * vec4(vertexNormal_modelspace,0)).xyz;

// Color of the vertex. No special space for this one.
newColor = vertexColor;

}

The above code works, but only because I'm not using the last input, modelsVP, to calculate gl_Position. If I do use it (instead of computing P * modelsV), the instances aren't drawn and I get this error:

Linking program
Compiling shader : GLSL/meshColor.vertexshader
Compiling shader : GLSL/meshColor.fragmentshader
Linking program
Vertex info
0(10) : error C5102: input semantic attribute "ATTR" has too big of a numeric index (16)
0(10) : error C5102: input semantic attribute "ATTR" has too big of a numeric index (16)
0(10) : error C5041: cannot locate suitable resource to bind variable "modelsVP". Possibly large array.

I'm sure I'm linking it correctly in my OpenGL code, because if I swap the locations of modelsVP and modelsV so that modelsVP is at 10 instead of 14, I can use it, but then not modelsV. Is there a maximum number of inputs you can have for a vertex shader? I can't think of any other reason why I would get this error...

I'll include more of my OpenGL code that is relevant here, but I'm pretty sure that it's correct (it's not all in the same class or method):

// Buffer data for VBO. The numbers must match the layout in the GLSL code.
#define position 0 
#define uv 1
#define color 2
#define normal 3 
#define tangent 4
#define bitangent 5
#define model 6      // 4x4 matrices take 4 positions
#define modelV 10
#define modelVP 14
#define num_buffers 18

GLuint VBO[num_buffers];
glGenBuffers(num_buffers, VBO);

for (int i = 0; i < ModelMatrices.size(); i++)
{
    mvp.push_back(projection * view * ModelMatrices.at(i));
    mv.push_back(view * ModelMatrices.at(i));
}

glBindBuffer(GL_ARRAY_BUFFER, VBO[model]);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::mat4) * ModelMatrices.size(), &ModelMatrices[0], GL_DYNAMIC_DRAW);
for (unsigned int i = 0; i < 4; i++) {
    glEnableVertexAttribArray(model + i);
    glVertexAttribPointer(model + i, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4),
                          (const GLvoid*)(sizeof(GLfloat) * i * 4));
    glVertexAttribDivisor(model + i, 1);
}

glBindBuffer(GL_ARRAY_BUFFER, VBO[modelV]);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::mat4) * mv.size(), &mv[0], GL_DYNAMIC_DRAW);
for (unsigned int i = 0; i < 4; i++) {
    glEnableVertexAttribArray(modelV + i);
    glVertexAttribPointer(modelV + i, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4),
                          (const GLvoid*)(sizeof(GLfloat) * i * 4));
    glVertexAttribDivisor(modelV + i, 1);
}

glBindBuffer(GL_ARRAY_BUFFER, VBO[modelVP]);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::mat4) * mvp.size(), &mvp[0], GL_DYNAMIC_DRAW);
for (unsigned int i = 0; i < 4; i++) {
    glEnableVertexAttribArray(modelVP + i);
    glVertexAttribPointer(modelVP + i, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4),
                          (const GLvoid*)(sizeof(GLfloat) * i * 4));
    glVertexAttribDivisor(modelVP + i, 1);
}
asked Aug 31 '13 by Drake

People also ask

How many times is the vertex shader executed?

The vertex shader will be executed roughly once for every vertex in the stream. A vertex shader is (usually) invariant with its input. That is, within a single Drawing Command, two vertex shader invocations that get the exact same input attributes will return binary identical results.

What is a vertex GLSL?

Vertex shaders manipulate coordinates in a 3D space and are called once per vertex. The purpose of the vertex shader is to set up the gl_Position variable — this is a special, global, and built-in GLSL variable. gl_Position is used to store the position of the current vertex.

How are vertex coordinates passed to a vertex shader?

The vertex shader receives data through the vertex-fetch stage, via in-qualified variables, and through calls such as glVertexAttrib*().

How does a vertex shader pass data to a fragment shader?

All shader stages can pass data between them by using input and output variables. If we create an output variable in the vertex shader, we can read it in the fragment shader as an input variable.
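For illustration, a minimal vertex/fragment shader pair that passes a color between the two stages might look like this (the names position, color, vColor, and fragColor are just placeholders):

// Minimal vertex shader: writes an output variable for the fragment stage.
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 color;
out vec3 vColor;                      // output of the vertex stage
void main() {
    vColor = color;
    gl_Position = vec4(position, 1.0);
}

// Minimal fragment shader: reads the same variable as an input.
#version 330 core
in vec3 vColor;                       // matches the vertex shader's output by name
out vec4 fragColor;
void main() {
    fragColor = vec4(vColor, 1.0);
}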


2 Answers

OpenGL only mandates that implementations offer a minimum of 16 4-component vertex attributes, so attribute indices above 15 are not guaranteed to be supported everywhere; see GL_MAX_VERTEX_ATTRIBS for details.

Each of your mat4 vertex attributes counts as 4 consecutive 4-component attributes, so a mat4 at location 14 occupies locations 14 through 17. That overruns the limit on an implementation that only supports 16 attribute slots (indices 0 through 15), which is exactly what the C5102 error about index 16 is telling you.
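
If you want to check what a particular implementation actually supports, you can query the limit at runtime; a minimal sketch (assuming a current OpenGL context and <cstdio> for the printf) would be:

GLint maxAttribs = 0;
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &maxAttribs);    // guaranteed to be at least 16
printf("GL_MAX_VERTEX_ATTRIBS = %d\n", maxAttribs);   // each mat4 input consumes 4 of these slots

With three mat4 inputs on top of the vec3 attributes, the shader in the question needs attribute indices up to 17, so it can only link on hardware that exposes more than the minimum.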

answered Nov 15 '22 by Andon M. Coleman


You are using too many vertex attributes. Here's how to reduce the attribute count without changing much about your code (and the functional changes that do happen are improvements). The following assumes that models is the "model-to-world" matrix, modelsV is the "model-to-camera" matrix, and modelsVP is the "model-to-projection" matrix:

#version 330 core

// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 2) in vec3 vertexColor;
layout(location = 3) in vec3 vertexNormal_modelspace;
layout(location = 6) in mat4 modelsV;

// Output data ; will be interpolated for each fragment.
out vec3 newColor;

//The fragment shader should work in *camera* space, not world space.
out vec4 Position_cameraspace;

out vec3 Normal_cameraspace;
//out vec3 EyeDirection_cameraspace; Can be computed from Position_cameraspace in the FS.

// Values that stay constant for the whole mesh.
uniform mat4 P;

void main()
{
  Position_cameraspace = modelsV * vec4(vertexPosition_modelspace, 1.0);

  gl_Position = P * Position_cameraspace;

  Normal_cameraspace = ( modelsV * vec4(vertexNormal_modelspace,0)).xyz;

  newColor = vertexColor;
}

See? Isn't that much simpler? Fewer uniforms in the vertex shader, fewer outputs to the fragment shader, less math, and fewer vertex attributes.

All you need to do is change your fragment shader to use the camera-space position rather than the world-space position, which should be a reasonably easy change.
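
Since the comment above notes that EyeDirection_cameraspace can be computed from Position_cameraspace in the fragment shader, a rough sketch of that fragment-shader side might look like the following. The lighting itself is reduced to a placeholder facing term here; the original lighting code would go in its place:

#version 330 core

in vec3 newColor;
in vec4 Position_cameraspace;    // forwarded from the vertex shader above
in vec3 Normal_cameraspace;

out vec4 color;                  // placeholder output name

void main()
{
    // In camera space the eye is at the origin, so the direction from the
    // fragment to the eye is just the negated camera-space position.
    vec3 EyeDirection_cameraspace = -Position_cameraspace.xyz;

    vec3 n = normalize(Normal_cameraspace);
    vec3 e = normalize(EyeDirection_cameraspace);

    // Placeholder shading (simple facing ratio); the original lighting code
    // that used EyeDirection_cameraspace would go here instead.
    float facing = clamp(dot(n, e), 0.0, 1.0);
    color = vec4(newColor * facing, 1.0);
}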

answered Nov 15 '22 by Nicol Bolas