In short: Does an un-delayed while loop consume significant processing power, compared to a similar loop which is slowed down by a delay?
In not-so-short:
I have run into this question more than once. I am writing the core part of a program (either for a microcontroller or a computer application), and it consists of a semi-infinite while loop that stays alive and looks for events.
I will take this example: I have a small application that uses an SDL window and the console. In a while loop I would like to listen to events for this SDL window, but I would also like to break this loop from the command-line input, by means of a global variable. Possible solution (pseudo-code):
// Global
bool running = true;

// ...

while (running)
{
    if (getEvent() == quit)
    {
        running = false;
    }
}

shutdown();
The core while loop quits either because of the event it listens for or because of something external. However, this loop runs continuously, possibly many thousands of times per second. That is overkill; I don't need that response time. Therefore I often add a delaying statement:
while (running)
{
    if (getEvent() == quit)
    {
        running = false;
    }
    delay(50); // Wait 50 milliseconds
}
This limits the refresh rate to 20 times per second, which is plenty.
So: is there a real difference between the two? Is it significant? Would it be more significant on a microcontroller, where processing power is very limited but nothing besides the program needs to run?
Well, in fact this isn't a question about C++; the answer depends on the CPU architecture, the host OS, and the delay() implementation.
In a multi-tasking environment, delay() could (and probably will) help the OS scheduler do its job more effectively. However, the real difference may be too small to notice (except under old cooperative multi-tasking, where delay() is a must).
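For reference, in standard C++11 the pseudo-code delay(50) would be spelled std::this_thread::sleep_for, which hands the CPU back to the scheduler for the duration. A minimal sketch:

#include <chrono>
#include <thread>

int main()
{
    bool running = true;
    while (running)
    {
        // ... poll for events here; stand-in quit condition:
        running = false;

        // Yield the CPU to the OS scheduler for ~50 ms instead of busy-looping.
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
}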
In a single-task environment (e.g. a microcontroller), delay() could still be useful if the underlying implementation executes dedicated low-power instructions instead of an ordinary busy loop. But, of course, there's no guarantee it will, unless the manual explicitly says so.
As for performance: obviously, with a delay you may receive and process an event significantly late (or even miss it completely), but if you believe that's not an issue in your case, there are no other arguments against delay().
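To illustrate the low-power case: on an ARM Cortex-M part, for example, the idle loop can execute the WFI ("wait for interrupt") instruction so the core sleeps until the next interrupt fires. A minimal sketch, assuming a CMSIS device header that provides the __WFI() intrinsic (the header name below is a placeholder):

#include "device.h" // placeholder for your vendor's CMSIS device header

volatile bool running = true; // typically cleared from an interrupt handler

int main()
{
    while (running)
    {
        // ... handle any pending events here ...

        // Sleep the core until the next interrupt (timer, UART, GPIO, ...)
        // instead of burning cycles in an empty loop.
        __WFI();
    }
}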
You will make your code much harder to read, and you are doing asynchronism the old-style way: you explicitly wait for something to happen instead of relying on a mechanism that does the job for you. Also, you delay by 50 ms. Is that always optimal? Does it depend on which other programs are running? Since C++11 you can use std::condition_variable. This allows you to wait for an event to happen without coding the waiting loops.
Documentation here: http://en.cppreference.com/w/cpp/thread/condition_variable
Here is an example, adapted to your context and simplified so that it just waits for a single event.
// Example program
#include <iostream>
#include <string>
#include <thread>
#include <mutex>
#include <chrono>
#include <condition_variable>

std::mutex m;
std::condition_variable cv;
std::string data;
bool processed = false;

void worker_thread()
{
    std::cout << "Worker thread starts processing data\n";
    std::this_thread::sleep_for(std::chrono::seconds(10)); // simulates the work

    {
        std::lock_guard<std::mutex> lk(m);
        data += " after processing";
        // Send data back to main()
        processed = true;
    }
    std::cout << "Worker thread signals data processing completed\n";
    std::cout << "Corresponds to your getEvent() == quit\n";
    // The lock is released before notifying, to avoid waking up
    // the waiting thread only to have it block again (see notify_one for details)
    cv.notify_one();
}

int main()
{
    data = "Example data";
    std::thread worker(worker_thread);

    // Wait for the worker
    {
        std::unique_lock<std::mutex> lk(m);
        // Wait for the processing to be finished; this thread sleeps and is
        // woken when it is done. No explicit waiting loop.
        cv.wait(lk, []{ return processed; });
    }
    std::cout << "data processed" << std::endl;

    worker.join(); // a joinable std::thread must be joined before destruction
}
In my experience, you must do something that relinquishes the processor. sleep works OK, and on most Windows systems even sleep(1) is adequate to completely unload the processor in a loop.
You can get the best of all worlds, however, if you use something like std::condition_variable. It is possible to come up with constructions using condition variables similar to events and WaitForSingleObject in the Windows API.
One thread can block on a condition variable that is notified by another thread. This way, one thread can do condition_variable.wait_for(lock, some_time), and it will either wait for the timeout period (without loading the processor) or continue execution immediately when another thread notifies it.
I use this method where one thread is sending messages to another thread. I want the receiving thread to respond as soon as possible, not after waiting for a sleep(20) to complete. The receiving thread does a condition_variable.wait_for() with a 20 ms timeout, for example. The sending thread enqueues a message and does a corresponding condition_variable.notify_one(). The receiving thread wakes immediately and processes the message.
This solution gives very fast response to messages, and does not unduly load the processor.
If you don't care about portability, and you happen to be using Windows, events and WaitForSingleObject do the same thing.
Your loop would look something like this (a sketch; the message-queue helpers are placeholders):

std::mutex m;
std::condition_variable cond_var;

while (!done)
{
    {
        // wait_for releases the lock while sleeping; it returns either on
        // notify_one() from the sender or after the 20 ms timeout
        std::unique_lock<std::mutex> lock(m);
        cond_var.wait_for(lock, std::chrono::milliseconds(20));
    }
    // process messages...
    msg = dequeue_message();
    if (msg == done_message)
        done = true;
    else
        process_message(msg);
}
In another thread...

void send_message(const std::string& msg)
{
    enqueue_message(msg);
    cond_var.notify_one(); // wakes the receiving thread immediately
}
Your message processing loop will spend most of its time idle, waiting on the condition variable. When a message is sent and the condition variable is notified by the sending thread, your receiving thread will respond immediately.
This allows your receiving thread to loop at a minimum rate set by the wait timeout, and at a maximum rate determined by the sending thread.
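For completeness, the Windows-only version mentioned above would use an auto-reset event. A sketch (the message-queue helpers remain placeholders, as before):

#include <windows.h>

HANDLE msg_event = CreateEvent(NULL, FALSE, FALSE, NULL); // auto-reset event

// Receiving thread: wakes on a new message, or every 20 ms at most.
void receive_loop()
{
    bool done = false;
    while (!done)
    {
        // Returns when the sender calls SetEvent, or after the 20 ms timeout.
        WaitForSingleObject(msg_event, 20);
        // ... dequeue and process messages here, as in the loop above ...
    }
}

// Sending thread:
void send_message_win32(/* message */)
{
    // ... enqueue the message, then wake the receiver immediately:
    SetEvent(msg_event);
}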
What you are asking is how to properly implement an event loop. Use OS calls: you ask the OS for an event or message, and if none is present, the OS simply puts the process to sleep. In a microcontroller environment you probably don't have an OS; there, the concept of interrupts has to be used, which is pretty much a "message" (or event) at a lower level.
And on the very simplest microcontrollers, where you have neither sleep modes nor usable interrupts, you end up with just looping.
In your example, a properly implemented getEvent() should block and do nothing until something actually happens, e.g. a key press.
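With SDL specifically, that blocking behaviour is already available: SDL_WaitEvent sleeps inside the library until an event arrives. A minimal sketch, assuming SDL2:

#include <SDL2/SDL.h>

int main()
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* window = SDL_CreateWindow("Demo", SDL_WINDOWPOS_CENTERED,
                                          SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    bool running = true;
    while (running)
    {
        SDL_Event event;
        // Blocks (consuming essentially no CPU) until an event arrives.
        if (SDL_WaitEvent(&event) && event.type == SDL_QUIT)
            running = false;
    }
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}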