I am writing my own engine using OpenTK (basically just OpenGL bindings for C#; gl* becomes GL.*) and I'm going to be storing a lot of vertex buffers with several thousand vertices in each. I therefore need my own custom vertex format, as a Vec3 with floats would simply take up too much space (I'm talking about millions of vertices here).
What I want to do is to create my own vertex format with this layout:
Byte 0: Position X
Byte 1: Position Y
Byte 2: Position Z
Byte 3: Texture Coordinate X
Byte 4: Color R
Byte 5: Color G
Byte 6: Color B
Byte 7: Texture Coordinate Y
Here is the code in C# for the vertex:
public struct SmallBlockVertex
{
    public byte PositionX;
    public byte PositionY;
    public byte PositionZ;
    public byte TextureX;
    public byte ColorR;
    public byte ColorG;
    public byte ColorB;
    public byte TextureY;
}
A byte as position for each axis is plenty, as I only need 32^3 unique positions.
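A side note: to guarantee that the struct really reaches the GPU as eight consecutive bytes in this exact order, it can be marked with an explicit layout; a minimal sketch using the standard System.Runtime.InteropServices attribute:

using System.Runtime.InteropServices;

// LayoutKind.Sequential keeps the fields in declaration order and
// Pack = 1 rules out any padding, so the struct is exactly 8 bytes.
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct SmallBlockVertex
{
    public byte PositionX, PositionY, PositionZ;
    public byte TextureX;
    public byte ColorR, ColorG, ColorB;
    public byte TextureY;
}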
I have written my own vertex shader, which takes two vec4s as inputs, one for each set of four bytes. My vertex shader is this:
attribute vec4 pos_data;
attribute vec4 col_data;

uniform mat4 projection_mat;
uniform mat4 view_mat;
uniform mat4 world_mat;

void main()
{
    vec4 position = pos_data * vec4(1.0, 1.0, 1.0, 0.0);
    gl_Position = projection_mat * view_mat * world_mat * position;
}
To try and isolate the problem, I have made my vertex shader as simple as possible. The code for compiling shaders is tested with immediate mode drawing, and it works, so it can't be that.
Here is my function which generates the vertex buffer, sets it up, fills it with data, and establishes the pointers to the attributes.
public void SetData<VertexType>(VertexType[] vertices, int vertexSize) where VertexType : struct
{
    GL.GenVertexArrays(1, out ArrayID);
    GL.BindVertexArray(ArrayID);

    GL.GenBuffers(1, out ID);
    GL.BindBuffer(BufferTarget.ArrayBuffer, ID);
    GL.BufferData<VertexType>(BufferTarget.ArrayBuffer, (IntPtr)(vertices.Length * vertexSize), vertices, BufferUsageHint.StaticDraw);

    GL.VertexAttribPointer(Shaders.PositionDataID, 4, VertexAttribPointerType.UnsignedByte, false, 4, 0);
    GL.VertexAttribPointer(Shaders.ColorDataID, 4, VertexAttribPointerType.UnsignedByte, false, 4, 4);
}
From what I understand, this is the correct procedure:
Generate a Vertex Array Object and bind it
Generate a Vertex Buffer and bind it
Fill the Vertex Buffer with data
Set the attribute pointers
Shaders.*DataID is set with this code after compiling and using the shader.
PositionDataID = GL.GetAttribLocation(shaderProgram, "pos_data");
ColorDataID = GL.GetAttribLocation(shaderProgram, "col_data");
And this is my render function:
void Render()
{
    GL.UseProgram(Shaders.ChunkShaderProgram);

    Matrix4 view = Constants.Engine_Physics.Player.ViewMatrix;
    GL.UniformMatrix4(Shaders.ViewMatrixID, false, ref view);

    //GL.Enable(EnableCap.DepthTest);
    //GL.Enable(EnableCap.CullFace);
    GL.EnableClientState(ArrayCap.VertexArray);
    {
        Matrix4 world = Matrix4.CreateTranslation(offset.Position);
        GL.UniformMatrix4(Shaders.WorldMatrixID, false, ref world);

        GL.BindVertexArray(ArrayID);
        GL.BindBuffer(OpenTK.Graphics.OpenGL.BufferTarget.ArrayBuffer, ID);
        GL.DrawArrays(OpenTK.Graphics.OpenGL.BeginMode.Quads, 0, Count / 4);
    }
    //GL.Disable(EnableCap.DepthTest);
    //GL.Disable(EnableCap.CullFace);
    GL.DisableClientState(ArrayCap.VertexArray);
    GL.Flush();
}
Can anyone be so kind as to give me some pointers (no pun intended)? Am I doing this in the wrong order, or are there some functions I need to call?
I've searched all over the web, but I can't find a single good tutorial or guide explaining how to implement custom vertex formats. If you need any more information, please say so.
There is not much to making your own vertex format; it is all done in the glVertexAttribPointer calls. First of all, you are using 4 as the stride parameter, but your vertex structure is 8 bytes wide, so there are 8 bytes from the start of one vertex to the next, and the stride has to be 8 (in both calls, of course). The offsets are correct, but you should set the normalized flag to true for the colors, as you surely want them in the [0,1] range (I don't know if this should also be the case for the vertex positions).
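With those two fixes applied, the calls from your SetData would read:

// Stride is the size of a whole vertex (8 bytes), not of one attribute group.
GL.VertexAttribPointer(Shaders.PositionDataID, 4, VertexAttribPointerType.UnsignedByte, false, 8, 0);
// normalized = true: the color bytes arrive in the shader as values in [0,1].
GL.VertexAttribPointer(Shaders.ColorDataID, 4, VertexAttribPointerType.UnsignedByte, true, 8, 4);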
Next, when using custom vertex attributes in shaders, you don't enable the deprecated fixed-function arrays (the gl...ClientState things). Instead you have to use
GL.EnableVertexAttribArray(Shaders.PositionDataID);
GL.EnableVertexAttribArray(Shaders.ColorDataID);
and the corresponding glDisableVertexAttribArray calls after drawing:
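GL.DisableVertexAttribArray(Shaders.PositionDataID);
GL.DisableVertexAttribArray(Shaders.ColorDataID);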
And what does the Count / 4 in the glDrawArrays call mean? Keep in mind that the last parameter specifies the number of vertices, not the number of primitives (quads in your case). But maybe it's intended this way.
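For example, if Count holds the total number of vertices in the buffer, the call would simply be:

// The last argument counts vertices, not quads.
GL.DrawArrays(BeginMode.Quads, 0, Count);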
Besides these real errors, you should not use such a complicated vertex format that you have to decode it in the shader yourself. That's what the stride and offset parameters of glVertexAttribPointer are for. For example, redefine your vertex data a bit:
public struct SmallBlockVertex
{
    public byte PositionX;
    public byte PositionY;
    public byte PositionZ;
    public byte ColorR;
    public byte ColorG;
    public byte ColorB;
    public byte TextureX;
    public byte TextureY;
}
and then you can just use
GL.VertexAttribPointer(Shaders.PositionDataID, 3, VertexAttribPointerType.UnsignedByte, false, 8, 0);
GL.VertexAttribPointer(Shaders.ColorDataID, 3, VertexAttribPointerType.UnsignedByte, true, 8, 3);
GL.VertexAttribPointer(Shaders.TexCoordDataID, 2, VertexAttribPointerType.UnsignedByte, true, 8, 6);
And in the shader you have
attribute vec3 pos_data;
attribute vec3 col_data;
attribute vec2 tex_data;
and you don't have to extract the texture coordinates from the position and color data yourself.
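A complete vertex shader along these lines could look roughly like this (a sketch; the varying names are just illustrative):

attribute vec3 pos_data;
attribute vec3 col_data;
attribute vec2 tex_data;

uniform mat4 projection_mat;
uniform mat4 view_mat;
uniform mat4 world_mat;

varying vec3 color;
varying vec2 tex_coord;

void main()
{
    // col_data and tex_data already arrive normalized to [0,1]
    color = col_data;
    tex_coord = tex_data;
    gl_Position = projection_mat * view_mat * world_mat * vec4(pos_data, 1.0);
}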
And you should really think about whether your space requirements actually demand bytes for the vertex positions, as this severely limits the precision of your position data. Maybe shorts or half-precision floats would be a good compromise.
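For illustration only (this 16-bit struct, its padding byte, and the 12-byte stride are hypothetical, just to show that only the type, stride, and offsets change), an unsigned-short variant could look like:

public struct SmallBlockVertex16
{
    public ushort PositionX;   // 0..65535 instead of 0..255 per axis
    public ushort PositionY;
    public ushort PositionZ;
    public byte ColorR;
    public byte ColorG;
    public byte ColorB;
    public byte TextureX;
    public byte TextureY;
    public byte Padding;       // keeps the stride even (12 bytes)
}

GL.VertexAttribPointer(Shaders.PositionDataID, 3, VertexAttribPointerType.UnsignedShort, false, 12, 0);
GL.VertexAttribPointer(Shaders.ColorDataID, 3, VertexAttribPointerType.UnsignedByte, true, 12, 6);
GL.VertexAttribPointer(Shaders.TexCoordDataID, 2, VertexAttribPointerType.UnsignedByte, true, 12, 9);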
It should also not be necessary to call glBindBuffer in the render method, as that binding is only needed for glVertexAttribPointer and is saved in the VAO that gets activated by glBindVertexArray. You should also usually not call glFlush, as this is done anyway by the OS when the buffers are swapped (assuming you use double buffering).
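Putting these points together, the render function could be trimmed to something like this (a sketch assuming Count is the total vertex count; see the edit below for a better place for the enable calls):

void Render()
{
    GL.UseProgram(Shaders.ChunkShaderProgram);

    Matrix4 view = Constants.Engine_Physics.Player.ViewMatrix;
    GL.UniformMatrix4(Shaders.ViewMatrixID, false, ref view);

    Matrix4 world = Matrix4.CreateTranslation(offset.Position);
    GL.UniformMatrix4(Shaders.WorldMatrixID, false, ref world);

    // The VAO remembers the buffer binding and the attribute pointers,
    // so no glBindBuffer and no glVertexAttribPointer calls are needed here.
    GL.BindVertexArray(ArrayID);
    GL.EnableVertexAttribArray(Shaders.PositionDataID);
    GL.EnableVertexAttribArray(Shaders.ColorDataID);

    GL.DrawArrays(BeginMode.Quads, 0, Count);

    GL.DisableVertexAttribArray(Shaders.PositionDataID);
    GL.DisableVertexAttribArray(Shaders.ColorDataID);
}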
And last but not least, be sure your hardware supports all the features you are using (like VBOs and VAOs).
EDIT: Actually, the enabled flags of the arrays are also stored in the VAO, so you can call
GL.EnableVertexAttribArray(Shaders.PositionDataID);
GL.EnableVertexAttribArray(Shaders.ColorDataID);
in the SetData method (after creating and binding the VAO, of course), and they then get enabled when you bind the VAO with glBindVertexArray in the render function. Oh, I just saw another error: when you bind the VAO in the render function, the enabled flags of the attribute arrays are overwritten by the state from the VAO, and since you did not enable them after creating the VAO, they are still disabled. So you will have to do it as described and enable the arrays in the SetData method. Actually, in your case you might be lucky and the VAO is still bound when you enable the arrays in the render function (as you didn't call glBindVertexArray(0)), but you shouldn't count on that.
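Taken all together, a SetData with these fixes applied could look like this (a sketch using the reordered 8-byte vertex and the TexCoordDataID location from above):

public void SetData<VertexType>(VertexType[] vertices, int vertexSize) where VertexType : struct
{
    GL.GenVertexArrays(1, out ArrayID);
    GL.BindVertexArray(ArrayID);

    GL.GenBuffers(1, out ID);
    GL.BindBuffer(BufferTarget.ArrayBuffer, ID);
    GL.BufferData<VertexType>(BufferTarget.ArrayBuffer, (IntPtr)(vertices.Length * vertexSize), vertices, BufferUsageHint.StaticDraw);

    // Stride is the full 8-byte vertex; colors and texture coordinates
    // are normalized so the bytes arrive in the shader as [0,1] values.
    GL.VertexAttribPointer(Shaders.PositionDataID, 3, VertexAttribPointerType.UnsignedByte, false, 8, 0);
    GL.VertexAttribPointer(Shaders.ColorDataID, 3, VertexAttribPointerType.UnsignedByte, true, 8, 3);
    GL.VertexAttribPointer(Shaders.TexCoordDataID, 2, VertexAttribPointerType.UnsignedByte, true, 8, 6);

    // The enabled flags are stored in the VAO, so enabling once here is enough.
    GL.EnableVertexAttribArray(Shaders.PositionDataID);
    GL.EnableVertexAttribArray(Shaders.ColorDataID);
    GL.EnableVertexAttribArray(Shaders.TexCoordDataID);

    GL.BindVertexArray(0); // unbind so later calls can't accidentally change the VAO
}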