I'm thinking about writing a custom Asio service on top of an existing proprietary 3rd party networking protocol that we are currently using.
According to the Highscore Asio guide, you need to implement three classes to create a custom Asio service:

- boost::asio::basic_io_object, representing the new I/O object;
- boost::asio::io_service::service, representing a service that is registered with the I/O service and can be accessed from the I/O object;
- a service implementation class that does the actual work.

The network protocol implementation already provides asynchronous operations and has a (blocking) event loop. So I thought I would put it into my service implementation class and run the event loop in an internal worker thread. So far so good.
Looking at some examples of custom services, I noticed that the service classes spawn their own internal threads (in fact they instantiate their own internal io_service instances). For example:
The Highscore page provides a directory monitor example. It is essentially a wrapper around inotify. The interesting classes are inotify/basic_dir_monitor_service.hpp and inotify/dir_monitor_impl.hpp. Dir_monitor_impl handles the actual interaction with inotify, which is blocking, and therefore runs in a background thread. I agree with that. But the basic_dir_monitor_service also has an internal worker thread, and all it seems to be doing is shuffling requests between main's io_service and the dir_monitor_impl. I played around with the code, removed the worker thread in basic_dir_monitor_service, posted requests directly to the main io_service instead, and the program still ran as before.
In Asio's custom logger service example, I noticed the same approach. The logger_service spawns an internal worker thread to handle the logging requests. I haven't had time to play around with that code, but I think it should be possible to post these requests directly to the main io_service as well.
What is the advantage of having these "intermediary workers"? Couldn't you post all work to the main io_service all the time? Did I miss some crucial aspect of the Proactor pattern?
I should probably mention that I'm writing software for an underpowered single-core embedded system. Having these additional threads in place just seems to impose unnecessary context switches which I'd like to avoid if possible.
At its core, Boost Asio provides a task execution framework that you can use to perform operations of any kind. You create your tasks as function objects and post them to a task queue maintained by Boost Asio. You enlist one or more threads to pick these tasks (function objects) and invoke them.
Asio's io_service is the facilitator for operating on asynchronous functions. Once an async operation is ready, it uses one of the io_service's running threads to call you back. If no such thread exists, it uses its own internal thread to call you. Think of it as a queue containing operations.
If threads are used, several functions can be executed concurrently on the available CPU cores. Boost.Asio with threads improves scalability because your program can take advantage of internal and external devices that can execute operations independently or in cooperation with each other.
In short, consistency. The services attempt to meet user expectations set forth by the services Boost.Asio provides.
Using an internal io_service provides a clear separation of ownership and control of handlers. If a custom service posts its internal handlers into the user's io_service, then execution of the service's internal handlers becomes implicitly coupled with the user's handlers. Consider how this would impact user expectations with the Boost.Asio Logger Service example:

- logger_service writes to the file stream within a handler. Thus, a program that never processes the io_service event loop, such as one that only uses the synchronous API, would never have log messages written.
- logger_service would no longer be thread-safe, potentially invoking undefined behavior if the io_service is processed by multiple threads.
- The lifetime of logger_service's internal operations would be constrained by that of the io_service. For example, when a service's shutdown_service() function is invoked, the lifetime of the owning io_service has already ended. Hence, messages could not be logged via logger_service::log() within shutdown_service(), as it would attempt to post an internal handler into an io_service whose lifetime has already ended.
- The user may no longer assume a one-to-one mapping between an operation and handler. For example:
    boost::asio::io_service io_service;
    debug_stream_socket socket(io_service);
    boost::asio::async_connect(socket, ..., &connect_handler);
    io_service.poll();
    // Can no longer assume connect_handler has been invoked.
In this case, io_service.poll() may invoke the handler internal to the logger_service, rather than connect_handler().
Furthermore, these internal threads attempt to mimic the behavior used internally by Boost.Asio itself:
The implementation of this library for a particular platform may make use of one or more internal threads to emulate asynchronicity. As far as possible, these threads must be invisible to the library user.
In the directory monitor example, an internal thread is used to prevent indefinitely blocking the user's io_service while waiting for an event. Once an event has occurred, the completion handler is ready to be invoked, so the internal thread posts the user's handler into the user's io_service for deferred invocation. This implementation emulates asynchronicity with an internal thread that is mostly invisible to the user.
For details: when an asynchronous monitor operation is initiated via dir_monitor::async_monitor(), a basic_dir_monitor_service::monitor_operation is posted into the internal io_service. When invoked, this operation invokes dir_monitor_impl::popfront_event(), a potentially blocking call. Hence, if the monitor_operation were posted into the user's io_service, the user's thread could be blocked indefinitely. Consider the effect on the following code:
    boost::asio::io_service io_service;
    boost::asio::dir_monitor dir_monitor(io_service);
    dir_monitor.add_directory(dir_name);
    // Post monitor_operation into io_service.
    dir_monitor.async_monitor(...);
    io_service.post(&user_handler);
    io_service.run();
In the above code, if io_service.run() invokes monitor_operation first, then user_handler() will not be invoked until dir_monitor observes an event on the dir_name directory. Therefore, the dir_monitor service's implementation would not behave in the consistent manner most users expect from other services.
The use of an internal thread and io_service:

- Maintains thread-safety for the std::ofstream, as only the single internal thread writes to the stream. If logging were done directly within logger_service::log(), or if logger_service posted its handlers into the user's io_service, then explicit synchronization would be required for thread-safety. Other synchronization mechanisms might introduce greater overhead or complexity into the implementation.
- Allows services to log messages within shutdown_service(). During destruction, the io_service will invoke shutdown_service() on each of its registered services, then destroy all uninvoked handlers that were scheduled for deferred invocation in the io_service or any of its associated strands.

As the lifetime of the user's io_service has ended, its event queue is neither being processed nor can additional handlers be posted. By having its own internal io_service that is processed by its own thread, logger_service enables other services to log messages during their shutdown_service().
When implementing a custom service, here are a few points to consider:
For the last two points, the dir_monitor I/O object exhibits behavior that users may not expect. Because the single thread within the service invokes a blocking operation on a single implementation's event queue, it effectively blocks operations that could potentially complete immediately for their respective implementations:
    boost::asio::io_service io_service;

    boost::asio::dir_monitor dir_monitor1(io_service);
    dir_monitor1.add_directory(dir_name1);
    dir_monitor1.async_monitor(&handler_A);

    boost::asio::dir_monitor dir_monitor2(io_service);
    dir_monitor2.add_directory(dir_name2);
    dir_monitor2.async_monitor(&handler_B);

    // ... Add file to dir_name2.

    { // Use scope to enforce lifetime.
        boost::asio::dir_monitor dir_monitor3(io_service);
        dir_monitor3.add_directory(dir_name3);
        dir_monitor3.async_monitor(&handler_C);
    }

    io_service.run();
Although the operations associated with handler_B() (success) and handler_C() (aborted) would not block, the single thread in basic_dir_monitor_service is blocked waiting for a change to dir_name1.