I have discovered that SwapBuffers in OpenGL will busy-wait as long as the graphics card isn't done with its rendering, or while it's waiting on V-Sync.
This is a problem for me because I don't want to waste 100% of a CPU core just waiting for the card to finish. I'm not writing a game, so I can't use those CPU cycles for anything more productive; I just want to yield them to some other process in the operating system.
I've found callback functions such as glutTimerFunc and glutIdleFunc that could work for me, but I don't want to use GLUT. Still, GLUT must somehow use the normal GL functions to do this, right?
Is there any function such as "glReadyToSwap" or similar? Then I could check it every millisecond or so and decide whether to wait a while longer or do the swap. Alternatively, if someone could point me in the right direction, I could skip SwapBuffers entirely and write my own similar function that doesn't busy-wait.
The SwapBuffers function exchanges the front and back buffers if the current pixel format for the window referenced by the specified device context includes a back buffer.
This sort of action is common in OpenGL programs; however, swapping buffers is a window-related function, not a rendering function; therefore, you cannot do it directly with OpenGL. To swap buffers, use glXSwapBuffers() or, when using the widget, the convenience function GLwDrawingAreaSwapBuffers().
Modern GL implementations will typically not block inside SwapBuffers itself, even if VSync is on.
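If you still want an explicit "glReadyToSwap"-style check, sync objects (OpenGL 3.2+, or via ARB_sync) come close: insert a fence after your draw calls and poll it with a zero timeout. This is only a sketch of the idea, not part of the original answer; `sleep_one_millisecond()` is a hypothetical helper for yielding to the OS:

```c
/* Sketch: poll GPU completion instead of blocking in SwapBuffers.
   Assumes a current OpenGL >= 3.2 context; error handling omitted. */
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glFlush();                       /* ensure the fence reaches the GPU */

for (;;) {
    /* timeout 0 => non-blocking poll of the fence */
    GLenum r = glClientWaitSync(fence, 0, 0);
    if (r == GL_ALREADY_SIGNALED || r == GL_CONDITION_SATISFIED)
        break;                   /* GPU is done: safe to swap now */
    sleep_one_millisecond();     /* hypothetical helper: yield the CPU */
}
glDeleteSync(fence);
SwapBuffers(hdc);                /* should return promptly now */
```

Note that this only tells you the GPU has finished your commands; with VSync on, the swap itself may still wait for the next vertical blank.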
SwapBuffers is not busy-waiting; it simply blocks your thread in the driver context, which makes Windows compute the CPU usage misleadingly. Windows estimates CPU usage from how much time the idle process gets, plus how much time programs spend outside driver context. SwapBuffers blocks in driver context, so your program obviously takes that time away from the idle process. But the CPU is doing literally nothing during that time; the scheduler is simply waiting to hand the time over to other processes. The idle process, on the other hand, does nothing except immediately yield its time to the rest of the system, so the scheduler jumps right back into your process, which is blocking in the driver, and Windows counts that as "clogging the CPU". If you measured the actual power consumption or heat output, a simple OpenGL program would stay rather low.
This irritating behaviour is actually an OpenGL FAQ!
Just create additional threads for parallel data processing: keep OpenGL in one thread and the data processing in another. If you want to get the reported CPU usage down, adding a Sleep(0) or Sleep(1) after SwapBuffers will do the trick. The Sleep(1) makes your process spend a little of its blocking time in user context, so the idle process gets more time, which evens out the numbers. If you don't want to sleep, you may do the following:
```c
const float time_margin = ...;      /* some margin */
float display_refresh_period;       /* something like 1./60. or so */

void render()
{
    float rendertime_start = get_time();
    render_scene();
    glFinish();
    float rendertime_finish = get_time();
    float time_to_finish = rendertime_finish - rendertime_start;

    /* sleep for the rest of the refresh interval, minus the margin */
    float time_rest = display_refresh_period
                    - fmod(time_to_finish, display_refresh_period)
                    - time_margin;
    if (time_rest > 0)
        sleep(time_rest);
    SwapBuffers();
}
```
In my programs I use this kind of timing, but for another reason: I let SwapBuffers block without any helper Sleeps, but I give some other worker threads roughly that window to do work on the GPU through a shared context (like updating textures), and I have the garbage collector running then. It's not really necessary to time it exactly, but having the worker threads finish just before SwapBuffers returns allows the next frame to start rendering almost immediately, since most mutexes are already unlocked by then.