
How long does it take for OpenGL to actually update the screen?

I have a simple OpenGL test app in C which draws different things in response to key input. (Mesa 8.0.4, tried with Mesa-EGL and with GLFW, Ubuntu 12.04LTS on a PC with NVIDIA GTX650). The draws are quite simple/fast (rotating triangle type of stuff). My test code does not limit the framerate deliberately in any way, it just looks like this:

while (true)
{
    draw();
    swap_buffers();
}

I have timed this very carefully, and I find that the time from one eglSwapBuffers() (or glfwSwapBuffers) call to the next is ~16.6 milliseconds. The time from after a call to eglSwapBuffers() to just before the next call is only a little bit less than that, even though what is drawn is very simple. The time that the swap buffers call takes is well under 1ms.

However, the time from the app changing what it's drawing in response to the key press to the change actually showing up on screen is >150ms (approx 8-9 frames worth). This is measured with a camera recording of the screen and keyboard at 60fps. (Note: It is true I do not have a way to measure how long it takes from key press to the app getting it. I am assuming it is <<150ms).

Therefore, the questions:

  1. Where are graphics buffered between a call to swap buffers and actually showing up on screen? Why the delay? It sure looks like the app is drawing many frames ahead of the screen at all times.

  2. What can an OpenGL application do to cause an immediate draw to screen? (ie: no buffering, just block until draw is complete; I don't need high throughput, I do need low latency)

  3. What can an application do to make the above immediate draw happen as fast as possible?

  4. How can an application know what is actually on screen right now? (Or, how long/how many frames the current buffering delay is?)

Alex I asked Jan 25 '14


2 Answers

  • Where are graphics buffered between a call to swap buffers and actually showing up on screen? Why the delay? It sure looks like the app is drawing many frames ahead of the screen at all times.

The commands are queued and rendered to the back buffer. If you have set a nonzero swap interval, the swap waits for the next vsync, and at that vsync the back buffer is displayed.

  • What can an OpenGL application do to cause an immediate draw to screen? (ie: no buffering, just block until draw is complete; I don't need high throughput, I do need low latency)

Calling glFinish will ensure everything is drawn before the call returns, but it gives you no control over when the result actually reaches the screen, other than the swap-interval setting.

  • What can an application do to make the above immediate draw happen as fast as possible? How can an application know what is actually on screen right now? (Or, how long/how many frames the current buffering delay is?)

Generally you can use a sync extension (something like http://www.khronos.org/registry/egl/extensions/NV/EGL_NV_sync.txt) to find this out.

Are you sure your method of measuring latency is correct? What if the key input itself has significant delay on your PC? Have you measured the latency from the event being received in your code to the point just after swap buffers?

Prabindh answered Nov 15 '22


You must understand that the GPU has dedicated memory available on board. At the most basic level this memory holds the encoded pixels you see on your screen (it is also used for hardware-accelerated graphics and other things, but that is unimportant here). Because loading a frame from main RAM into GPU RAM takes time, you can get a flickering effect: for a brief moment you see the background instead of what is supposed to be displayed. Although this copying happens extremely fast, it is noticeable to the human eye and quite annoying.

To counter this, we use a technique called double buffering. Double buffering works by keeping an additional frame buffer in GPU RAM (there can be one or many extra buffers, depending on the graphics library and the GPU, but two is enough) together with a pointer indicating which buffer should be displayed. While the first frame is being displayed, you are already creating the next one in your code using some draw() function on an image structure in main RAM; that image is then copied to GPU RAM (while the previous frame is still being displayed), and when you call eglSwapBuffers() the pointer simply switches to the back buffer (I guessed this from your question; I'm not familiar with OpenGL, but the idea is quite universal). You can imagine that this pointer switch takes very little time. I hope you now see that writing an image directly to the screen would actually cause much more delay (and annoying flickering).

Also, ~16.6 milliseconds does not sound like that much. I think most of that time is spent creating and setting up the required data structures, not in the drawing computations themselves (you could test this by drawing only the background).

Lastly, I'd like to add that I/O is usually pretty slow (the slowest part of most programs), and 150 ms is not that long at all (still twice as fast as the blink of an eye).

Jori answered Nov 15 '22