I am rendering an array of points with a custom vertex shader. The shader looks like this:
void mainVP(in varying int in_vertex_id : VERTEXID)
{
    foo(in_vertex_id);
}
So the only thing I need is the vertex ID. But I need a lot of vertices, and I don't want to store a fake VBO for them (it would take around 16 MB of memory).
I tried running my code without binding any VBO, and it works. So my rendering looks like:
size_t num_vertices = ...
glDrawArrays(GL_POINTS, 0, num_vertices);
But can I be sure that rendering without binding a VBO is safe?
But can I be sure that rendering without binding a VBO is safe?
You can't.
The OpenGL specification's core profile (3.2 and above) clearly states that this is allowed: you can render with all attributes disabled. The compatibility profile, and any version before 3.2, just as clearly states that you cannot.
Of course, that doesn't matter anyway. NVIDIA drivers allow you to do this on any OpenGL version and profile. ATI's drivers don't allow you to do it on any OpenGL version or profile. They're both driver bugs, just in different ways.
You'll just have to accept that you need a dummy vertex attribute. However:
But I need a lot of vertices, and I don't want to store a fake VBO for them (it would take around 16 MB of memory).
A dummy attribute only needs 4 bytes per vertex (a single float, or a 4-vector of normalized bytes; remember, you don't care about the data). So you could fit 4 million vertices in 16 MB.
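As a rough sketch of that idea (assuming a GL 3.x context, and that attribute location 0 is used for the dummy attribute; the name dummy_vbo is just for illustration), a 4-byte-per-vertex dummy buffer could look like this:

GLuint dummy_vbo;
glGenBuffers(1, &dummy_vbo);
glBindBuffer(GL_ARRAY_BUFFER, dummy_vbo);
// Allocate num_vertices * 4 bytes of uninitialized storage; the contents are never meaningfully read.
glBufferData(GL_ARRAY_BUFFER, num_vertices * 4, NULL, GL_STATIC_DRAW);
// Expose it as a 4-vector of normalized unsigned bytes on attribute 0 (tightly packed, so stride 0).
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0, (void*)0);

glDrawArrays(GL_POINTS, 0, num_vertices);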
Alternatively, you could use instanced rendering via glDrawArraysInstanced. There, you render just one vertex, but with num_vertices instances. Your shader will have to use the instance ID, of course.
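A minimal sketch of that alternative (it requires OpenGL 3.1+ or ARB_draw_instanced, and the shader would read the instance ID, e.g. gl_InstanceID in GLSL, instead of the vertex ID):

// Draw a single vertex, instanced num_vertices times.
glDrawArraysInstanced(GL_POINTS, 0, 1, num_vertices);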