Spoiler: I'm fairly confident that the answer is NO, but that's only after a day of very frustrated debugging. Now I would like to know if that is indeed the case (and if so, how I might have known), or if I'm just doing something completely wrong.
Here's the situation. I'm using OpenGL ES 2.0 to render some meshes that I load from various files (.obj, .md2, etc.). For the sake of performance and user experience, I delegate the actual loading of these meshes and their associated textures to a background thread using GCD.
Per Apple's instructions, on each background thread I create and set a new EAGLContext with the same shareGroup as the main rendering context. This allows OpenGL objects, like texture and buffer objects, that were created on the background thread to be used immediately by the context on the main thread.
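For reference, a minimal sketch of that setup (mainContext and the actual loading work are assumed; the OES prototypes come from <OpenGLES/ES2/glext.h>):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Second context in the same sharegroup, so objects created here
    // are visible to the main rendering context.
    EAGLContext *loaderContext =
        [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2
                              sharegroup:mainContext.sharegroup];
    [EAGLContext setCurrentContext:loaderContext];

    // ... generate textures, VBOs, etc. here ...

    // Flush so the new objects become visible to the other
    // contexts in the sharegroup.
    glFlush();
});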
This has been working out splendidly. Now, I recently learned about Vertex Array Objects as a way to cache the OpenGL state associated with rendering the contents of certain buffers. They look nice, and reduce the boilerplate state checking and setting code required to render each mesh. On top of that, Apple recommends using them in its Best Practices for Working with Vertex Data guide.
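To illustrate the appeal, here's a rough sketch of the difference (the mesh fields and positionAttrib are assumptions, not from my actual code):

// Without a VAO: re-specify the vertex state before every draw.
glBindBuffer(GL_ARRAY_BUFFER, mesh.vertexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mesh.indexBuffer);
glEnableVertexAttribArray(positionAttrib);
glVertexAttribPointer(positionAttrib, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 3, 0);
glDrawElements(GL_TRIANGLES, mesh.indexCount, GL_UNSIGNED_SHORT, 0);

// With a VAO: that state was recorded once at load time, so drawing
// shrinks to a bind and a draw.
glBindVertexArrayOES(mesh.vao);
glDrawElements(GL_TRIANGLES, mesh.indexCount, GL_UNSIGNED_SHORT, 0);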
But I was having serious issues getting the VAOs to work for me at all. Like I do with all loading, I would load the mesh from a file into memory on a background thread, and then generate all associated OpenGL objects. Without fail, the first time I tried to call glDrawElements() using a VAO, the app crashes with EXC_BAD_ACCESS. Without the VAO, it renders fine.
Debugging EXC_BAD_ACCESS is a pain, especially when NSZombies won't help (which they obviously won't), but after some time spent analyzing captured OpenGL frames, I realized that while the creation of the VAO on the background thread went fine (no GL error, and a non-zero id), when the time came to bind the VAO on the main thread, I would get a GL_INVALID_OPERATION, which the docs state will happen when attempting to bind a non-existent VAO. And sure enough, when looking at all the objects in the current context at the time of rendering, there isn't a single VAO to be seen, but all of the VBOs that were generated with the VAO AT THE SAME TIME are present. If I create the VAO on the main thread, it works fine. Very frustrating.
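For what it's worth, the failure is easy to see with a plain error check (a sketch; _vao was generated on the background context):

// Main thread, main context current. The id came from a sharegroup
// sibling context, and the bind fails:
glBindVertexArrayOES(_vao);
if (glGetError() == GL_INVALID_OPERATION) {
    NSLog(@"VAO %u does not exist in this context", _vao);
}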
I distilled the loading code to a more atomic form:
- (void)generate {
    // Create the VAO and bind it so subsequent buffer state is recorded in it.
    glGenVertexArraysOES(1, &_vao);
    glBindVertexArrayOES(_vao);

    // Generate four VBOs for the mesh data.
    _vbos = malloc(sizeof(GLuint) * 4);
    glGenBuffers(4, _vbos);
}
When the above is executed on a background thread, with a valid EAGLContext in the same shareGroup as the main context, the main context will have 4 VBOs, but no VAO (the check sketched below makes this visible). If I execute it on the main thread, with the main context, it will have 4 VBOs and the VAO. This leads me to the conclusion that there is some weird exception to the object-sharing nature of EAGLContexts when it comes to VAOs. If that is actually the case, I would have expected the Apple docs to note it somewhere; it's very inconvenient to have to discover little tidbits like that by hand. Is this the case, or am I missing something?
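The check in question (a sketch, run on the main thread with the main context current, after -generate finished on the background context and the buffers were bound and filled):

// The buffer objects made it across the sharegroup...
NSLog(@"VBO shared: %d", glIsBuffer(_vbos[0]));      // 1 (GL_TRUE)

// ...but the vertex array object did not.
NSLog(@"VAO shared: %d", glIsVertexArrayOES(_vao));  // 0 (GL_FALSE)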
According to the OES_vertex_array_object extension specification, OpenGL ES explicitly disallows sharing of VAOs across contexts:
Should vertex array objects be sharable across multiple OpenGL ES contexts?
RESOLVED: No. The OpenGL ES working group took a straw-poll and agreed that compatibility with OpenGL and ease of implementation were more important than creating the first non-shared named object in OpenGL ES.
As you noted, VBOs are still shareable, so you just have to create a VAO for each context that binds the shared VBO.
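In practice that workaround might look like the sketch below: keep the expensive buffer uploads on the background context, and defer only the cheap VAO creation to the main context. The method names, the Vertex struct, and the hop back to the main queue are assumptions, not the asker's code.

// Background thread, loader context (same sharegroup): upload the data.
- (void)loadBuffers {
    _vbos = malloc(sizeof(GLuint) * 4);
    glGenBuffers(4, _vbos);
    glBindBuffer(GL_ARRAY_BUFFER, _vbos[0]);
    glBufferData(GL_ARRAY_BUFFER, _vertexCount * sizeof(Vertex), _vertices, GL_STATIC_DRAW);
    // ... fill the remaining buffers ...
    glFlush();  // make the objects visible to the rest of the sharegroup

    dispatch_async(dispatch_get_main_queue(), ^{
        [self createVertexArray];  // VAOs are per-context, so build it here
    });
}

// Main thread, main context: record the vertex state in a context-local VAO.
- (void)createVertexArray {
    glGenVertexArraysOES(1, &_vao);
    glBindVertexArrayOES(_vao);
    glBindBuffer(GL_ARRAY_BUFFER, _vbos[0]);  // shared VBO, visible here
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
    glBindVertexArrayOES(0);
}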