I'm using an example here that works under
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
but it becomes a transparent window when I set it to
glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
I need the example to work with some drawing under GLUT_DOUBLE mode.
So what's the difference between GLUT_DOUBLE and GLUT_SINGLE?
Single buffering results in flickering (scan-out sends a partially rendered frame to the monitor), while double buffering without waiting for vertical sync results in tearing (scan-out switches from one complete frame to the next in the middle of a frame rather than between frames).
There are two buffers in the system. One buffer is used by the driver or controller to store data while waiting for it to be taken by a higher level of the hierarchy; the other buffer is used to store data from the lower-level module. Double buffering is also known as buffer swapping.
Well, the main advantage of double buffering is that the user does not see every pixel modification (so if the same pixel is modified 5 times, the user only sees the last one). This helps prevent flickering (screen flashes).
When using GLUT_SINGLE, you can picture your code drawing directly to the display.
When using GLUT_DOUBLE, you can picture having two buffers. One of them is always visible, the other one is not. You always render to the buffer that is not currently visible. When you're done rendering the frame, you swap the two buffers, making the one you just rendered visible. The one that was previously visible is now hidden, and you use it for rendering the next frame. So the roles of the two buffers are reversed each frame.
In reality, the underlying implementation works somewhat differently on most modern systems. For example, some platforms use triple buffering to prevent blocking when a buffer swap is requested. But that doesn't normally concern you. The key is that it behaves as if you had two buffers.
The main difference, aside from specifying the different flag in the argument to glutInitDisplayMode(), is the call you make at the end of the display function. This is the function registered with glutDisplayFunc(), which is DrawCube() in the code you linked.
In single buffer mode, you call this at the end:
glFlush();
In double buffer mode, you call:
glutSwapBuffers();
So all you should need to do is replace the glFlush() at the end of DrawCube() with glutSwapBuffers() when using GLUT_DOUBLE.
When drawing to a single buffered context (GLUT_SINGLE), there is only one framebuffer, which is used both to draw and to display the content. This means that you draw more or less directly to the screen. In addition, things drawn last in a frame are shown for a shorter time than objects drawn at the beginning.
In a double buffered scenario (GLUT_DOUBLE), there exist two framebuffers. One is used for drawing, the other one for display. At the end of each frame, these buffers are swapped. This way, the view changes only once, when a frame is finished, and all objects are visible for the same amount of time.
That being said: are you sure that the transparent window is caused by GLUT_DOUBLE and not by using GLUT_RGBA instead of GLUT_RGB?