Right now, I'm modelling a little OpenGL library to fool around with graphics programming and so on. To that end, I'm using classes to wrap specific OpenGL function calls such as texture creation, shader creation and the like. So far, so good.
My Problem:
All OpenGL calls must be made by the thread that owns the created OpenGL context (at least under Windows; from any other thread they will do nothing and raise an OpenGL error). So, in order to get an OpenGL context, I first create an instance of a window class (just another wrapper around the WinAPI calls) and then create an OpenGL context for that window. That sounded quite logical to me. (If there's already a flaw in my design that makes you scream, let me know...)
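For reference, the typical WGL sequence looks roughly like this (error handling and the pixel-format setup are omitted):

HDC hdc = GetDC(hwnd);                // device context of the window
// ... ChoosePixelFormat / SetPixelFormat on hdc ...
HGLRC hglrc = wglCreateContext(hdc);  // create the OpenGL context
wglMakeCurrent(hdc, hglrc);           // bind it to *this* thread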
If I want to create a texture, or any other object that needs OpenGL calls for its creation, I basically do this (the constructor of an OpenGL object, as an example):
opengl_object()
{
    // do necessary stuff for object initialisation
    // pass object to the OpenGL thread for final construction
    // wait until object is constructed by the OpenGL thread
}
So, in words, I create an object like any other object using
opengl_object obj;
which then, in its constructor, puts itself into a queue of OpenGL objects to be created by the OpenGL context thread. The OpenGL context thread then calls a virtual function, implemented by every OpenGL object, which contains the OpenGL calls needed to finally create the object.
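A minimal sketch of that handoff, assuming a thread-safe gl_queue and a GL-thread loop exist elsewhere (all names here are illustrative):

#include <condition_variable>
#include <mutex>

class opengl_object
{
public:
    virtual ~opengl_object() = default;
    virtual void create() = 0;      // the GL thread runs this with the context current

    void wait_until_created()       // the constructing thread blocks here
    {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return created_; });
    }
    void mark_created()             // the GL thread calls this after create()
    {
        { std::lock_guard<std::mutex> lk(m_); created_ = true; }
        cv_.notify_one();
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    bool created_ = false;
};

// Constructing thread:  gl_queue.enqueue(obj); obj->wait_until_created();
// GL thread loop:       obj->create(); obj->mark_created();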
I really thought this way of handling the problem would be nice. However, right now I think I'm awfully wrong.
The thing is, even though the above works perfectly fine so far, I run into trouble as soon as the class hierarchy gets deeper. For example (not a perfect example, but it shows my problem):
Let's say I have a class called sprite, representing a Sprite, obviously. It has its own create function for the OpenGL thread, in which the vertices and texture coordinates are loaded into the graphics card's memory and so on. No problem so far. Let's further say I want two ways of rendering sprites: one instanced and one not. So I end up with two classes, sprite_instanced and sprite_not_instanced. Both derive from sprite, since both are sprites that are merely rendered differently. However, sprite_instanced and sprite_not_instanced need further OpenGL calls in their create functions.
My Solution so far (and I feel really awful about it!)
I have some understanding of how object construction works in C++ and how it affects virtual functions. So I decided to use the virtual create function of the sprite class only to load the vertex data and so on into graphics memory. The virtual create method of sprite_instanced then does the preparation needed to render that sprite instanced. So, if I write
sprite_instanced s;
first the sprite constructor is called, and after some initialisation the constructing thread passes the object to the OpenGL thread. At this point the passed object is merely a normal sprite, so sprite::create will be called and the OpenGL thread creates a normal sprite. After that, the constructing thread runs the constructor of sprite_instanced, again does some initialisation and passes the object to the OpenGL thread. This time, however, it is a sprite_instanced, and therefore sprite_instanced::create will be called.
So, if I'm right with the above assumption, everything happens exactly as it should, at least in my case. I spent the last hour reading about calling virtual functions from constructors and how the vtable is built, and I've run some tests to check my assumption, but those might be compiler-specific, so I don't rely on them 100%. In addition, it just feels awful, like a terrible hack.
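For what it's worth, when the virtual call is made from within the constructors themselves, the two-stage dispatch is guaranteed by the standard, not compiler-specific: during a constructor, the object's dynamic type is the class currently being constructed. A single-threaded sketch:

#include <iostream>

struct sprite
{
    sprite() { create(); }            // dispatches to sprite::create
    virtual ~sprite() = default;
    virtual void create() { std::cout << "sprite::create\n"; }
};

struct sprite_instanced : sprite
{
    sprite_instanced() { create(); }  // now dispatches to the override
    void create() override { std::cout << "sprite_instanced::create\n"; }
};

int main()
{
    sprite_instanced s;  // prints sprite::create, then sprite_instanced::create
}

(Whether this still holds when a second thread touches the half-constructed object is another matter, which is part of why it feels like a hack.)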
Another Solution
Another possibility would be implementing a factory method in the OpenGL thread class to take care of this. Then I could do all the OpenGL calls inside the constructors of those objects. However, in that case I would need a lot of functions (or one template-based approach), and it feels like a potential loss of rendering time whenever the OpenGL thread has more to do than it strictly needs to...
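The template-based variant could be a single blocking factory, roughly like this (gl_thread and its enqueue method are assumed interfaces, not real code):

#include <functional>
#include <future>
#include <memory>
#include <utility>

struct gl_thread                             // assumed: owns the GL-context thread
{
    void enqueue(std::function<void()> f);   // runs f on that thread
};

template <typename T, typename... Args>
std::unique_ptr<T> create_on_gl_thread(gl_thread &glt, Args&&... args)
{
    std::promise<std::unique_ptr<T>> done;
    auto result = done.get_future();
    glt.enqueue([&] {
        // Runs on the GL thread, so T's constructor may issue GL calls directly.
        done.set_value(std::make_unique<T>(std::forward<Args>(args)...));
    });
    return result.get();  // the calling thread blocks until construction finishes
}

// Usage: auto s = create_on_gl_thread<sprite_instanced>(glt);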
My Question
Is it OK to handle it the way I described above, or should I rather throw that stuff away and do something else?
You were already given some good advice. So I'll just spice it up a bit:
One important thing to understand about OpenGL is that it is a state machine which doesn't need some elaborate "initialization". You just use it, and that's about it. Buffer Objects (Textures, Vertex Buffer Objects, Pixel Buffer Objects) may make it look different, and most tutorials and real-world applications indeed fill Buffer Objects at application start.
However, it is perfectly fine to create them during regular program execution. In my 3D engine I use the free CPU time during the double-buffer swap for asynchronous uploads into Buffer Objects:

for (b in buffers) { glMapBuffer(b.target, GL_WRITE_ONLY); }
start_buffer_filling_thread();
SwapBuffers();
wait_for_buffer_filling_thread();
for (b in buffers) { glUnmapBuffer(b.target); }
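In concrete (though hedged) C++ terms, assuming a Buffer struct holding the GL target, name, source data and size, plus <windows.h> and an extension loader for the glMapBuffer family, that pattern could look like:

#include <cstring>
#include <thread>
#include <vector>

struct Buffer { GLenum target; GLuint name; const void *data; size_t size; void *mapped; };

void upload_during_swap(std::vector<Buffer> &buffers, HDC hdc)
{
    for (Buffer &b : buffers) {              // map on the GL thread
        glBindBuffer(b.target, b.name);
        b.mapped = glMapBuffer(b.target, GL_WRITE_ONLY);
    }
    std::thread filler([&] {                 // fill the mappings; needs no GL context
        for (Buffer &b : buffers)
            std::memcpy(b.mapped, b.data, b.size);
    });
    SwapBuffers(hdc);                        // the swap overlaps with the filling
    filler.join();
    for (Buffer &b : buffers) {              // unmap on the GL thread
        glBindBuffer(b.target, b.name);
        glUnmapBuffer(b.target);
    }
}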
It's also important to understand that simple things like sprites should not each be given their own VBO. One normally groups large numbers of sprites into a single VBO. You don't have to draw them all together, since you can offset into the VBO and make partial drawing calls. But this common OpenGL pattern (geometrical objects sharing a buffer object) goes completely against the principle behind your classes. So you'd need some buffer object manager that hands out slices of address space to consumers, as sketched below.
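A hypothetical sketch of such a manager, handing out byte ranges of one large VBO (a linear allocator for brevity; a real manager would also track and reuse freed slices):

#include <cassert>

struct BufferSlice { GLuint vbo; GLintptr offset; GLsizeiptr size; };

class BufferManager
{
public:
    explicit BufferManager(GLsizeiptr capacity) : capacity_(capacity)
    {
        glGenBuffers(1, &vbo_);
        glBindBuffer(GL_ARRAY_BUFFER, vbo_);
        glBufferData(GL_ARRAY_BUFFER, capacity_, nullptr, GL_DYNAMIC_DRAW);
    }
    BufferSlice allocate(GLsizeiptr size)
    {
        assert(used_ + size <= capacity_);   // out of space; a real manager would grow or recycle
        BufferSlice s{vbo_, used_, size};
        used_ += size;
        return s;
    }
private:
    GLuint vbo_ = 0;
    GLsizeiptr capacity_ = 0, used_ = 0;
};

A sprite would then fill its slice with glBufferSubData and render its range with a partial draw call, e.g. glDrawArrays with the first vertex computed from the slice offset.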
Using a class hierarchy with OpenGL in itself is not a bad idea, but then it should sit some levels higher than OpenGL. If you just map OpenGL 1:1 to classes, you gain nothing but complexity and bloat. Whether I call OpenGL functions directly or through a class, I still have to do all the grunt work. So a texture class should not just map the concept of a texture object; it should also take care of interacting with Pixel Buffer Objects (if used).
If you actually want to wrap OpenGL in classes, I strongly recommend not using virtual functions but static (i.e. at the compilation-unit level) inline functions, so that they become syntactic sugar the compiler will not bloat up too much.
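A minimal sketch of that "syntactic sugar" idea: no virtuals, just thin inline members the compiler can collapse into the raw GL calls:

class Texture2D
{
public:
    void create()        { glGenTextures(1, &name_); }
    void bind() const    { glBindTexture(GL_TEXTURE_2D, name_); }
    void destroy()       { glDeleteTextures(1, &name_); name_ = 0; }
    GLuint name() const  { return name_; }
private:
    GLuint name_ = 0;
};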
The question is simplified by the fact that a single context is assumed to be current on a single thread; in reality there can be multiple OpenGL contexts, possibly on different threads (and while we're at it, object-name-space sharing between contexts should also be considered).
First of all, I think you should separate the OpenGL calls from the object constructor. Doing this allows you to set up an object without caring about which OpenGL context is current; afterwards, the object can be enqueued for creation on the main rendering thread.
An example. Suppose we have two queues: one holding Texture objects whose data is to be loaded from the filesystem, and one holding Texture objects whose data is to be uploaded into GPU memory (after the data has been loaded, of course).
Thread 1: The texture loader
{
    for (;;) {
        // Drain the load queue: read texture data from disk, then
        // hand the texture over to the rendering thread for upload.
        while (textureLoadQueue.Size() > 0) {
            Texture obj = textureLoadQueue.Dequeue();
            obj.Load();
            textureUploadQueue.Enqueue(obj);
        }
    }
}
Thread 2: The texture uploader code section, essentially the main rendering thread
{
    // Runs on the thread that owns the GL context (e.g. once per
    // frame): upload every texture whose data has finished loading.
    while (textureUploadQueue.Size() > 0) {
        Texture obj = textureUploadQueue.Dequeue();
        obj.Upload(ctx);
    }
}
The Texture object constructor should look like:
Texture::Texture(const char *path)
{
    mImagePath = path;
    textureLoadQueue.Enqueue(this);
}
This is only an example. Of course each object has different requirements, but this solution is the most scalable.
My solution is essentially described by the interface IRenderObject (the documentation differs quite a bit from the current implementation, since I'm refactoring a lot at the moment and development is at a very alpha stage). This solution is written in C#, which introduces additional complexity due to garbage-collection management, but the concepts are perfectly adaptable to C++.
Essentially, the interface IRenderObject defines a base OpenGL object:
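(The actual interface is C#; a hypothetical C++ rendering of the shape described here might look like this, with all names illustrative:)

class RenderContext;  // abstracts "the current context", see below

class IRenderObject
{
public:
    virtual ~IRenderObject() = default;
    // Creation/deletion take the context explicitly, enabling the
    // sanity checks discussed next.
    virtual void Create(RenderContext &ctx) = 0;
    virtual void Delete(RenderContext &ctx) = 0;
    // Schedule deletion on a thread where a suitable context is
    // current (the "garbage collector" mentioned at the end).
    virtual void Release() = 0;
};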
The creation/deletion operations are very intuitive. They take a RenderContext abstracting the current context; using this object, it is possible to run checks that help find bugs in object creation/deletion.
Here is an example using the Delete method. The code runs, but it doesn't work as expected:
RenderContext ctx1 = new RenderContext(), ctx2 = new RenderContext();
Texture tex1, tex2;
ctx1.MakeCurrent(true);
tex1 = new Texture2D();
tex1.Load("example.bmp");
tex1.Create(ctx1); // In this case, we have texture object name = 1
ctx2.MakeCurrent(true);
tex2 = new Texture2D();
tex2.Load("example.bmp");
tex2.Create(ctx2); // Again texture object name = 1, the same as before, since the two contexts do not share an object name space
// Somewhere in the code
ctx1.MakeCurrent(true);
tex2.Delete(ctx1); // Works, but it actually deletes the texture represented by tex1!!!
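A hedged sketch (in C++ style) of the kind of check the RenderContext parameter makes possible; mCreatedCtx and SharesNamespaceWith are illustrative names:

#include <cassert>

void Texture2D::Delete(RenderContext &ctx)
{
    // mCreatedCtx is remembered by Create(); without sharing, object
    // names are only meaningful in their originating context. With
    // this check, the tex2.Delete(ctx1) call above would assert
    // instead of silently deleting tex1's texture.
    assert(ctx.SharesNamespaceWith(*mCreatedCtx) &&
           "deleting with a context that does not share this object's name space");
    glDeleteTextures(1, &mName);
    mName = 0;
}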
The asynchronous release operation aims to delete the object without having a current context (in fact, the method doesn't take any RenderContext parameter). It can happen that the object is disposed in a separate thread which doesn't have a current context; moreover, I cannot rely on the garbage collector (C++ doesn't have one anyway), since it runs on a thread over which I have no control. Furthermore, it is desirable to implement the IDisposable interface, so application code can control the OpenGL object's lifetime.
The OpenGL GarbageCollector is executed on the thread that has the right context current.
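A minimal C++ sketch of such a collector, assuming destructors/finalizers only enqueue object names and the rendering thread drains the queue while the right context is current:

#include <mutex>
#include <vector>

class GLGarbageCollector
{
public:
    void scheduleTextureDelete(GLuint name)   // safe to call from any thread
    {
        std::lock_guard<std::mutex> lock(mtx_);
        textures_.push_back(name);
    }
    void collect()                            // call on the GL-context thread
    {
        std::vector<GLuint> doomed;
        {
            std::lock_guard<std::mutex> lock(mtx_);
            doomed.swap(textures_);           // grab the batch, release the lock
        }
        if (!doomed.empty())
            glDeleteTextures(static_cast<GLsizei>(doomed.size()), doomed.data());
    }
private:
    std::mutex mtx_;
    std::vector<GLuint> textures_;
};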