
Is double buffering needed any more?

As today's graphics cards seem to keep a list of render commands and flush only on a call to glFlush or glFinish, is double buffering really needed any more? An OpenGL game I am developing on Linux (ATI Mobility Radeon card) with SDL/OpenGL actually flickers less when SDL_GL_SwapBuffers() is replaced by glFinish() and SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 0) is set in the init code. Is this particular to my card, or is this likely on all cards?
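
For reference, a minimal sketch of the two variants being compared, using the SDL 1.2 API (the window size and draw_scene() are placeholders, not from the original post):

```c
#include <SDL/SDL.h>
#include <GL/gl.h>

extern void draw_scene(void);   /* placeholder for the game's drawing code */

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);

    /* Variant under test: single-buffered context.
       The conventional setup passes 1 here instead. */
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 0);
    SDL_SetVideoMode(800, 600, 32, SDL_OPENGL);

    for (;;) {   /* event handling omitted for brevity */
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        draw_scene();
        glFinish();               /* single-buffered: wait for drawing to finish */
        /* SDL_GL_SwapBuffers();     double-buffered: swap front/back instead    */
    }
}
```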

EDIT: I've discovered that the cause of this is KWin. It appears that, as datenwolf said, compositing without sync was the cause. When I switched off KWin compositing, the game works fine without ANY source code patches.

asked Jul 01 '11 by Sudarshan S




1 Answer

Double buffering and glFinish are two very different things.

glFinish blocks the program until all pending drawing operations have completed.

Double buffering is used to hide the rendering process from the user. Without double buffering, each and every drawing operation would become visible immediately, assuming an infinitely high display refresh frequency. In practice you will get display artifacts: part of the scene visible in one state while the rest is in another, an incomplete picture, and so on. Double buffering avoids this by first rendering into a back buffer, and only after the rendering has finished swapping it with the front buffer, which is what gets sent to the display device.
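
A sketch of that per-frame sequence (SDL 1.2 API; draw_scene() stands in for the application's drawing code):

```c
#include <SDL/SDL.h>
#include <GL/gl.h>

extern void draw_scene(void);   /* hypothetical application drawing code */

static void render_frame(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    draw_scene();            /* all output lands in the hidden back buffer     */
    SDL_GL_SwapBuffers();    /* the finished frame becomes visible in one step */
}
```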

Today, compositing window management is becoming prevalent: Windows has Aero, Mac OS X has Quartz Extreme, and on Linux at least Unity and the GNOME 3 shell use compositing if available. The point is: compositing technically creates double buffering. Each window draws to an offscreen buffer, and the final screen is composited from these. So if you're running on a machine with compositing, double buffering performed in your program is somewhat redundant; all it would take is some kind of synchronization mechanism to tell the compositor when the next frame is ready. Mac OS X has this. X11 still lacks a proper synchronization scheme; see this post on the mailing list: http://lists.freedesktop.org/archives/xorg/2004-May/000607.html
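
From application code, about the best you can do is request a double-buffered context with the swap synchronized to the display refresh. A sketch using SDL 1.2's SDL_GL_SWAP_CONTROL attribute (available in later 1.2 releases; attributes must be set before the context is created):

```c
#include <SDL/SDL.h>

/* Request a double-buffered, vsynced GL context (SDL 1.2). GL attributes
   must be set before SDL_SetVideoMode() creates the context. */
static void create_vsynced_window(void)
{
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, 1);   /* sync swaps to refresh */
    SDL_SetVideoMode(800, 600, 32, SDL_OPENGL);
}
```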

TL;DR: Double buffering and glFinish are different things, and you need double buffering (of some sort) to make things look good.

answered Nov 16 '22 by datenwolf