I'm programming a calculator. When the window is maximized, the CPU usage is about 12%, but when it is minimized, the CPU usage rises to about 50%. Why is this happening and how can I prevent this? Here is the piece of code that I think is causing the problem.
LRESULT CALLBACK WndProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
    switch(uMsg)
    {
        case WM_ACTIVATE:
            if(!HIWORD(wParam))        //HIWORD(wParam) is nonzero when the window is minimized
                active = true;
            else
                active = false;
            return 0;
        case WM_SYSCOMMAND:
            switch(wParam)
            {
                case SC_SCREENSAVE:    //Block the screensaver
                case SC_MONITORPOWER:  //Block monitor power-save
                    return 0;
            }
            break;
        case WM_CLOSE:
            PostQuitMessage(0);
            return 0;
        case WM_KEYDOWN:
            if((wParam >= VK_LEFT && wParam <= VK_DOWN) || wParam == VK_CONTROL)
                myCalc.handleInput(wParam, true);
            return 0;
        case WM_CHAR:
            myCalc.handleInput(wParam);
            return 0;
        case WM_SIZE:
            ReSizeGLScene(LOWORD(lParam), HIWORD(lParam)); //LOWORD = Width; HIWORD = Height
            return 0;
    }
    return DefWindowProc(hWnd, uMsg, wParam, lParam);
}
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nShowCmd)
{
    MSG msg;
    if(!CreateGLWindow(WINDOW_CAPTION, WINDOW_WIDTH, WINDOW_HEIGHT, WINDOW_BPP))
    {
        return 0;
    }
    while(!done) //Main loop
    {
        if(PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
        {
            if(msg.message == WM_QUIT)
                done = true;
            else
            {
                TranslateMessage(&msg); //Translate the message
                DispatchMessage(&msg);  //Dispatch the message
            }
        }
        else
        {
            //Start the time handler
            myTimeHandler.Start();
            //Draw the GL scene
            if(active)
            {
                DrawGLScene();    //Draw the scene
                SwapBuffers(hDC); //Swap buffers (double buffering)
            }
            //Regulate the fps
            myTimeHandler.RegulateFps();
        }
    }
    //Shutdown
    KillGLWindow();
    return (int)msg.wParam;
}
My guess is that your main loop runs without any delay when active is false. The thread spins through that loop endlessly and keeps one of your two processor cores busy (that's why you see 50% CPU load).
When active is true, the swap operation waits for the next vsync and delays your loop until the next screen refresh, resulting in a lower CPU load. (Time a thread spends blocked inside a Windows function waiting for an event does not count toward its CPU load.)
To solve the problem, you could switch to a GetMessage-based message loop for the times when you do not want to render anything.
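A rough sketch of that idea, reusing your existing done, active, msg and hDC names (assumed here as-is, adjust to your code): while the window is inactive the loop blocks in GetMessage, so the thread sleeps until the next message arrives instead of spinning.
while(!done)
{
    if(!active)
    {
        //Nothing to render: block until a message arrives (no CPU burned here)
        if(GetMessage(&msg, NULL, 0, 0) <= 0) //0 = WM_QUIT, -1 = error
        {
            done = true;
        }
        else
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
    else if(PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
    {
        if(msg.message == WM_QUIT)
            done = true;
        else
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
    else
    {
        DrawGLScene();
        SwapBuffers(hDC);
    }
}
Once the window is activated again, WM_ACTIVATE sets active back to true inside DispatchMessage and the loop falls back to the PeekMessage path.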
The less area your OpenGL window covers, the quicker the scene is drawn (the key term is fill rate), and thus the event loop iterates at a higher frequency. I see you have some function RegulateFps. To me this sounds like something that busy-loops until a certain amount of time has been consumed in the renderer, i.e. you're literally wasting CPU time just to keep the frame rate low. Why do you want a low frame rate in the first place? Get rid of that.
And of course if you minimize the window, you set active = false, so no GL work happens at all, but the thread still wastes its time in that busy loop.
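If you do want an upper bound on the frame rate, sleep off the leftover frame time instead of busy-waiting. A sketch for the render branch of your main loop (your myTimeHandler internals are unknown, so this uses GetTickCount and a hypothetical 60 fps target):
const DWORD targetFrameMs = 1000 / 60;   //hypothetical 60 fps cap
DWORD frameStart = GetTickCount();

DrawGLScene();
SwapBuffers(hDC);

DWORD elapsed = GetTickCount() - frameStart;
if(elapsed < targetFrameMs)
    Sleep(targetFrameMs - elapsed);      //yields the CPU instead of spinning
Note that GetTickCount and Sleep only have roughly 10-15 ms resolution, which is fine for throttling but not for precise frame timing.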
Try switching on vertical synchronization in the driver options and use double buffering; then SwapBuffers will block until the vertical blank. And when active == false, use GetMessage instead of PeekMessage.
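You can also request vsync from code through the WGL_EXT_swap_control extension instead of relying on the driver panel. A sketch (mySwapInterval is just a local name for the extension pointer; call this once after CreateGLWindow has made the GL context current):
typedef BOOL (WINAPI *SWAPINTERVALPROC)(int interval);
SWAPINTERVALPROC mySwapInterval =
    (SWAPINTERVALPROC)wglGetProcAddress("wglSwapIntervalEXT");
if(mySwapInterval)               //NULL if the driver does not expose the extension
    mySwapInterval(1);           //1 = wait for one vertical blank per SwapBuffers call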