 

using multiple io_service objects

Tags:

boost-asio

I have an application that listens for and processes messages from both internet sockets and unix domain sockets. Now I need to add SSL to the internet sockets. I was using a single io_service object for all the sockets in the application, but it seems I now need separate io_service objects for the network sockets and the unix domain sockets. I don't have any threads in my application, and I use async_send, async_receive and async_accept to process data and connections. Please point me to any examples using multiple io_service objects with async handlers.

asked Mar 19 '13 by Ravikumar Tulugu

2 Answers

The question has a degree of uncertainty as to whether multiple io_service objects are actually required. I could not locate anything in the reference documentation, or in the overview for SSL and UNIX domain sockets, that mandates separate io_service objects. Regardless, here are a few options:


Single io_service:

Try to use a single io_service.

If you do not have a direct handle to the io_service object, but you have a handle to a Boost.Asio I/O object, such as a socket, then a handle to the associated io_service object can be obtained by calling socket.get_io_service().
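For example, a minimal sketch of the single-io_service approach for the sockets described in the question (SSL handshaking, certificate setup, and the actual async operations are omitted; UNIX domain sockets require platform support):

#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>

int main()
{
  boost::asio::io_service io_service;

  // An SSL stream layered over a TCP socket, driven by io_service.
  boost::asio::ssl::context ssl_context(boost::asio::ssl::context::sslv23);
  boost::asio::ssl::stream<boost::asio::ip::tcp::socket>
      ssl_socket(io_service, ssl_context);

  // A UNIX domain socket driven by the same io_service.
  boost::asio::local::stream_protocol::socket local_socket(io_service);

  // If only an I/O object is at hand, the owning io_service can be
  // recovered via get_io_service().
  boost::asio::io_service& owner = local_socket.get_io_service();
  (void)owner;

  // ... initiate async_accept/async_read/async_write operations here ...

  // A single run() call services completion handlers for both sockets.
  io_service.run();
}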


Use a thread per io_service:

If multiple io_service objects are required, then dedicate a thread to each io_service. This approach is used in Boost.Asio's HTTP Server 2 example.

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>

boost::asio::io_service service1;
boost::asio::io_service service2;

// Run service1's event loop in a dedicated thread and service2's in the
// current thread, then join once both loops have finished.
boost::thread_group threads;
threads.create_thread(boost::bind(&boost::asio::io_service::run, &service1));
service2.run();
threads.join_all();

One consequence of this approach is that it may require thread-safety guarantees to be made by the application. For example, if service1 and service2 both have completion handlers that invoke message_processor.process(), then message_processor.process() needs to either be thread-safe or be called in a thread-safe manner.
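For instance, here is a minimal sketch of one way to meet that requirement with a mutex (message_processor and process() are the placeholder names used above, not a real API):

#include <string>
#include <boost/thread/mutex.hpp>
#include <boost/thread/locks.hpp>

// Placeholder component invoked from completion handlers of both
// service1 and service2.
class message_processor
{
public:
  void process(const std::string& message)
  {
    // Handlers from service1 and service2 may run concurrently in
    // different threads, so serialize access to shared state.
    boost::lock_guard<boost::mutex> lock(mutex_);
    // ... actual handling of message ...
  }

private:
  boost::mutex mutex_;
};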


Poll io_service:

io_service provides non-blocking alternatives to run(). Whereas io_service::run() will block until all work has finished, io_service::poll() will run handlers that are ready to run and will not block. This allows a single thread to execute the event loop on multiple io_service objects:

while (!service1.stopped() &&
       !service2.stopped())
{
  std::size_t ran = 0;
  ran += service1.poll();
  ran += service2.poll();
  // If no handlers ran, then sleep.
  if (0 == ran)
  {
    boost::this_thread::sleep_for(boost::chrono::seconds(1));
  }
}

To prevent a tight busy-loop when there are no ready-to-run handlers, it may be worth adding a sleep, as shown above. Be aware that this sleep may introduce latency in the overall handling of events.


Transfer handlers to a single io_service:

One interesting approach is to use a strand to transfer completion handlers to a single io_service. This allows for a thread per io_service, while avoiding the need for the application to make its own thread-safety guarantees, as all completion handlers will post through a single io_service whose event loop is processed by only one thread.

boost::asio::io_service service1;
boost::asio::io_service service2;

// strand2 belongs to service1; handlers wrapped by it are posted to
// service1 even when invoked from service2.
boost::asio::strand strand2(service1);
// work2 keeps service2.run() from returning (see below).
boost::asio::io_service::work work2(service2);

// socket is assumed to be an I/O object associated with service2.
socket.async_read_some(buffer, strand2.wrap(read_some_handler));

boost::thread_group threads;
threads.create_thread(boost::bind(&boost::asio::io_service::run, &service1));
service2.run();
threads.join_all();

This approach does have some consequences:

  • It requires handlers that are intended to be run by the main io_service to be wrapped via strand::wrap().
  • The asynchronous chain now runs through two io_service objects, creating an additional level of complexity. It is important to account for the case where the secondary io_service no longer has work, causing its run() to return.

It is common for an asynchronous chain to occur within the same io_service. Thus, the service never runs out of work, as each completion handler posts additional work onto the io_service.

   |    .------------------------------------------.
   V    V                                          |
read_some_handler()                                |
{                                                  |
  socket.async_read_some(..., read_some_handler) --'
}

On the other hand, when a strand is used to transfer work to another io_service, the wrapped handler is invoked within service2, causing it to post the completion handler into service1. If the wrapped handler was the only work in service2, then service2 no longer has work, causing service2.run() to return.

    service1                      service2
====================================================

        .----------------- wrapped(read_some_handler)
        |                            .
        V                            .
 read_some_handler                NO WORK
        |                            .
        |                            .
        '----------------> wrapped(read_some_handler)

To account for this, the example code uses an io_service::work object for service2, so that its run() call remains blocked until the service is explicitly told to stop().
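For example, a sketch of one possible shutdown path under this arrangement (how the shutdown is triggered, such as from a signal handler or a final completion handler, is application specific and not part of the example above):

// Explicitly stop service2; the work2 object otherwise prevents
// service2.run() from ever returning on its own.
service2.stop();

// Optionally stop service1 as well, or let it run out of work naturally.
service1.stop();

// With both event loops finished, the thread running service1 can be joined.
threads.join_all();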

answered Sep 22 '22 by Tanner Sansbury

It looks like you are writing a server and not a client. I don't know if this helps, but I am using ASIO to communicate with 6 servers from my client, over TCP/IP with SSL/TLS. You can find a link to the code here

You should be able to use just one io_service object with multiple socket objects. But, if you decide that you really want to have multiple io_service objects, then it should be fairly easy to do so. In my class, the io_service object is static. So, just remove the static keyword along with the logic in the constructor that only creates one instance of the io_service object. Depending on the number of connections expected for your server, you would probably be better off using a thread pool dedicated to handling socket I/O rather than creating a thread for each new socket connection.
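As a rough sketch of that suggestion (the pool size of 4 is arbitrary), several threads can call run() on the same io_service so that completion handlers for all sockets are dispatched across the pool; shared state touched by the handlers then needs a strand or other synchronization:

#include <cstddef>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>

int main()
{
  boost::asio::io_service io_service;

  // Keeps run() from returning while connections are still being set up.
  boost::asio::io_service::work work(io_service);

  // ... create acceptors/sockets here and start their async operations ...

  // A small pool of threads, all servicing the same io_service.
  boost::thread_group pool;
  for (std::size_t i = 0; i < 4; ++i)
    pool.create_thread(boost::bind(&boost::asio::io_service::run, &io_service));

  pool.join_all();
}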

answered Sep 22 '22 by Bob Bryan