
boost::asio async server design

Currently I'm using a design where the server reads the first 4 bytes of the stream, then reads N more bytes after decoding the header.

But I found that the time between the first async_read and the second read is 3-4 ms. I just printed timestamps from the callbacks to the console to measure it. I sent 10 bytes of data in total. Why does it take so much time to read?

I'm running it in debug mode, but I think one connection in a debug build is not enough load to cause a 3 ms delay between reads from the socket. Maybe I need another approach to split the TCP stream into "packets"?

UPDATE: I've posted some code here:

void parseHeader(const boost::system::error_code& error)
{
    cout << "[parseHeader] " << lib::GET_SERVER_TIME() << endl;
    if (error) {
        close();
        return;
    }
    GenTCPmsg::header result = msg.parseHeader();
    if (result.error == GenTCPmsg::parse_error::__NO_ERROR__) {
        msg.setDataLength(result.size);
        boost::asio::async_read(*socket,
            boost::asio::buffer(msg.data(), result.size),
            (*_strand).wrap(
                boost::bind(&ConnectionInterface::parsePacket, shared_from_this(),
                            boost::asio::placeholders::error)));
    } else {
        close();
    }
}

void parsePacket(const boost::system::error_code& error)
{
    cout << "[parsePacket] " << lib::GET_SERVER_TIME() << endl;
    if (error) {
        close();
        return;
    }
    protocol->parsePacket(msg);
    msg.flush();
    boost::asio::async_read(*socket,
        boost::asio::buffer(msg.data(), config::HEADER_SIZE),
        (*_strand).wrap(
            boost::bind(&ConnectionInterface::parseHeader, shared_from_this(),
                        boost::asio::placeholders::error)));
}

As you can see, the unix timestamps differ by 3-4 ms. I want to understand why so much time elapses between parseHeader and parsePacket. This is not a client problem: the total data is only 10 bytes, I can't send much more than that, and the delay is exactly between the calls. I'm using Flash client version 11, and all I do is send a ByteArray through the opened socket, all 10 bytes at once, so I'm not sure the delay is on the client. How can I debug where the actual delay is?

asked Sep 02 '12 by Denis Ermolin

2 Answers

There are far too many unknowns to identify the root cause of the delay from the posted code alone. Nevertheless, here are a few approaches and considerations that can help identify the problem:

  • Enable handler tracking for Boost.Asio 1.47+. Simply define BOOST_ASIO_ENABLE_HANDLER_TRACKING and Boost.Asio will write debug output, including timestamps, to the standard error stream. These timestamps can be used to help filter out delays introduced by application code (parseHeader(), parsePacket(), etc.).
  • Verify that byte-ordering is being handled properly. For example, if the protocol defines the header's size field as two bytes in network-byte-order and the server is handling the field as a raw short, then upon receiving a message that has a body size of 10 (see the conversion sketch after this list):
    • A big-endian machine will call async_read reading 10 bytes. The read operation should complete quickly, as the socket already has the 10-byte body available for reading.
    • A little-endian machine will call async_read reading 2560 bytes. The read operation will likely remain outstanding, as far more bytes are being requested than intended.
  • Use tracing tools such as strace, ltrace, etc.
  • Modify Boost.Asio, adding timestamps throughout the callstack. Boost.Asio is shipped as a header-only library, so users may modify it to provide as much verbosity as desired. While not the cleanest or easiest of approaches, adding print statements with timestamps throughout the callstack may help provide visibility into timing.
  • Try duplicating the behavior in a short, simple, self-contained example. Start with the simplest of examples to determine if the delay is systematic. Then iteratively expand upon the example so that it becomes closer to the real code with each iteration.
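
A minimal sketch covering the first two bullets, assuming the hypothetical two-byte, network-byte-order size field described above (the question's real header layout may differ):

#define BOOST_ASIO_ENABLE_HANDLER_TRACKING  // must appear before any Boost.Asio include

#include <cstring>
#include <iostream>
#include <arpa/inet.h>       // ntohs
#include <boost/asio.hpp>
#include <boost/cstdint.hpp>

// Hypothetical helper: decode a two-byte, network-byte-order size field from a
// received header.  Omitting the ntohs() conversion is exactly what turns a
// 10-byte body into a 2560-byte read request on a little-endian host.
std::size_t decode_body_size( const unsigned char* header )
{
  boost::uint16_t raw;
  std::memcpy( &raw, header, sizeof raw );
  return ntohs( raw );       // 0x000A decodes to 10, not 2560
}

int main()
{
  const unsigned char header[ 2 ] = { 0x00, 0x0A };      // body size 10 on the wire
  std::cout << decode_body_size( header ) << std::endl;  // prints 10
}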

Here is a simple example from which I started:

#include <iostream>

#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/make_shared.hpp>
#include <boost/shared_ptr.hpp>

class tcp_server
  : public boost::enable_shared_from_this< tcp_server >
{
private:

  enum 
  {
     header_size = 4,
     data_size   = 10,
     buffer_size = 1024,
     max_stamp   = 50
  };

  typedef boost::asio::ip::tcp tcp;

public:

  typedef boost::array< boost::posix_time::ptime, max_stamp > time_stamps;

public:

  tcp_server( boost::asio::io_service& service,
              unsigned short port )
    : strand_( service ),
      acceptor_( service, tcp::endpoint( tcp::v4(), port ) ),
      socket_( service ),
      index_( 0 )
  {}

  /// @brief Returns collection of timestamps.
  time_stamps& stamps()
  {
    return stamps_;
  }

  /// @brief Start the server.
  void start()
  {
    acceptor_.async_accept( 
      socket_,
      boost::bind( &tcp_server::handle_accept, this,
                   boost::asio::placeholders::error ) );
  }

private:

  /// @brief Accept connection.
  void handle_accept( const boost::system::error_code& error ) 
  {
    if ( error )
    {  
      std::cout << error.message() << std::endl;
      return;
    }

    read_header();
  }

  /// @brief Read header.
  void read_header()
  {
    boost::asio::async_read(
      socket_,
      boost::asio::buffer( buffer_, header_size ),
      boost::bind( &tcp_server::handle_read_header, this,
                   boost::asio::placeholders::error,
                   boost::asio::placeholders::bytes_transferred ) );
  }

  /// @brief Handle reading header.
  void
  handle_read_header( const boost::system::error_code& error,
                      std::size_t bytes_transferred )
  {
    if ( error )
    {  
      std::cout << error.message() << std::endl;
      return;
    }

    // If no more stamps can be recorded, then stop the async-chain so
    // that io_service::run can return.
    if ( !record_stamp() ) return;

    // Read data.
    boost::asio::async_read(
      socket_,
      boost::asio::buffer( buffer_, data_size ),
      boost::bind( &tcp_server::handle_read_data, this,
                   boost::asio::placeholders::error,
                   boost::asio::placeholders::bytes_transferred ) );

  }

  /// @brief Handle reading data.
  void handle_read_data( const boost::system::error_code& error,
                         std::size_t bytes_transferred )
  {
    if ( error )
    {  
      std::cout << error.message() << std::endl;
      return;
    }

    // If no more stamps can be recorded, then stop the async-chain so
    // that io_service::run can return.
    if ( !record_stamp() ) return;

    // Start reading header again.
    read_header();
  }

  /// @brief Record time stamp.
  bool record_stamp()
  {
    stamps_[ index_++ ] = boost::posix_time::microsec_clock::local_time();

    return index_ < max_stamp;
  }

private:
  boost::asio::io_service::strand strand_;
  tcp::acceptor acceptor_;
  tcp::socket socket_;
  boost::array< char, buffer_size > buffer_;
  time_stamps stamps_;
  unsigned int index_;
};


int main()
{
  boost::asio::io_service service;

  // Create and start the server.
  boost::shared_ptr< tcp_server > server =
    boost::make_shared< tcp_server >( boost::ref(service ), 33333 );  
  server->start();

  // Run.  This will exit once enough time stamps have been sampled.
  service.run();

  // Iterate through the stamps.
  tcp_server::time_stamps& stamps = server->stamps();
  typedef tcp_server::time_stamps::iterator stamp_iterator;
  using boost::posix_time::time_duration;
  for ( stamp_iterator iterator = stamps.begin() + 1,
                       end      = stamps.end();
        iterator != end;
        ++iterator )
  {
     // Obtain the delta between the current stamp and the previous.
     time_duration delta = *iterator - *(iterator - 1);
     std::cout << "Delta: " << delta.total_milliseconds() << " ms"
               << std::endl;
  }
  // Calculate the total delta.
  time_duration delta = *stamps.rbegin() - *stamps.begin();
  std::cout <<    "Total" 
            << "\n  Start: " << *stamps.begin()
            << "\n  End:   " << *stamps.rbegin()
            << "\n  Delta: " << delta.total_milliseconds() << " ms"
            << std::endl;
}

A few notes about the implementation:

  • There is only one thread (main) and one asynchronous chain read_header->handle_read_header->handle_read_data. This should minimize the amount of time a ready-to-run handler spends waiting for an available thread.
  • To focus on boost::asio::async_read, noise is minimized by:
    • Using a pre-allocated buffer.
    • Not using shared_from_this() or strand::wrap.
    • Recording the timestamps and performing the processing post-collection.

I compiled on CentOS 5.4 using gcc 4.4.0 and Boost 1.50. To drive the data, I opted to send 1000 bytes using netcat:

$ ./a.out > output &
[1] 18623
$ echo "$(for i in {0..1000}; do echo -n "0"; done)" | nc 127.0.0.1 33333
[1]+  Done                    ./a.out >output
$ tail output
Delta: 0 ms
Delta: 0 ms
Delta: 0 ms
Delta: 0 ms
Delta: 0 ms
Delta: 0 ms
Total
  Start: 2012-Sep-10 21:22:45.585780
  End:   2012-Sep-10 21:22:45.586716
  Delta: 0 ms

Observing no delay, I expanded upon the example by modifying the boost::asio::async_read calls, replacing this with shared_from_this() and wrapping the ReadHandlers with strand_.wrap(). I ran the updated example and still observed no delay. Unfortunately, that is as far as I could get based on the code posted in the question.
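
For reference, here is roughly what the modified read_header() looked like in that second pass (a sketch only, reusing the names from the example above; shared_from_this() is valid because the tcp_server is created through boost::make_shared):

// Second-pass read_header(): the only changes from the example above are
// shared_from_this() instead of this, and strand_.wrap() around the handler.
void read_header()
{
  boost::asio::async_read(
    socket_,
    boost::asio::buffer( buffer_, header_size ),
    strand_.wrap(
      boost::bind( &tcp_server::handle_read_header, shared_from_this(),
                   boost::asio::placeholders::error,
                   boost::asio::placeholders::bytes_transferred ) ) );
}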

Consider expanding upon the example, adding in a piece from the real implementation with each iteration. For example:

  • Start with using the msg variable's type to control the buffer.
  • Next, send valid data, and introduce the parseHeader() and parsePacket() functions.
  • Finally, introduce the lib::GET_SERVER_TIME() print.

If the example code is as close as possible to the real code, and no delay is being observed with boost::asio::async_read, then the ReadHandlers may be ready-to-run in the real code, but they are waiting on synchronization (the strand) or a resource (a thread), resulting in a delay:

  • If the delay is the result of synchronization with the strand, then consider Robin's suggestion of reading a larger block of data to potentially reduce the number of reads required per message.
  • If the delay is the result of waiting for a thread, then consider having an additional thread call io_service::run(); a minimal sketch follows.
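
A minimal sketch of the extra-thread variant (the helper name run_service and the single extra worker thread are illustrative choices, not part of the example above):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>

// Plain helper so boost::bind does not need to pick between the overloads of
// io_service::run.
void run_service( boost::asio::io_service* service )
{
  service->run();
}

int main()
{
  boost::asio::io_service service;

  // ... create the server and start its asynchronous chain here ...

  // Run the io_service from the main thread plus one worker thread, so a
  // ready-to-run handler does not wait for a single busy thread.  Handlers
  // that touch shared state still need a strand (or other synchronization).
  boost::thread worker( boost::bind( &run_service, &service ) );
  service.run();
  worker.join();
}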
answered Oct 17 '22 by Tanner Sansbury


One thing that makes Boost.Asio awesome is using the async feature to the fullest. Relying on a specific number of bytes read in one batch, possibly ditching some of what could already have been read, isn't really what you should be doing.

Instead, look at the web server example, especially this one: http://www.boost.org/doc/libs/1_51_0/doc/html/boost_asio/example/http/server/connection.cpp

A boost::tribool is used to either a) complete the request if all the data is available in one batch, b) ditch it if it's available but not valid, or c) just read more, when the io_service chooses to, if the request was incomplete. The connection object is shared with the handler through a shared pointer. The sketch below applies the same idea to a length-prefixed protocol.
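
It is a minimal sketch only; the class name message_parser, the 4-byte host-byte-order length prefix, and the size limit are illustrative assumptions (the linked example parses HTTP request text instead):

#include <cstring>
#include <vector>
#include <boost/cstdint.hpp>
#include <boost/logic/tribool.hpp>

// Hypothetical incremental parser for a 4-byte length prefix followed by a
// body.  Feed it whatever async_read_some delivered; it reports:
//   true          - a complete message is ready (see body())
//   false         - the stream is invalid (e.g. an absurd size field)
//   indeterminate - more data is needed, issue another read
// The length prefix is read in host byte order here; see the byte-ordering
// discussion in the first answer.
class message_parser
{
public:
  boost::tribool parse( const char* data, std::size_t size )
  {
    buffer_.insert( buffer_.end(), data, data + size );

    if ( buffer_.size() < header_size )
      return boost::indeterminate;               // header incomplete

    boost::uint32_t body_size = 0;
    std::memcpy( &body_size, &buffer_[ 0 ], header_size );

    if ( body_size > max_body_size )
      return false;                              // reject malformed headers

    if ( buffer_.size() < header_size + body_size )
      return boost::indeterminate;               // body incomplete

    body_.assign( buffer_.begin() + header_size,
                  buffer_.begin() + header_size + body_size );
    buffer_.erase( buffer_.begin(),
                   buffer_.begin() + header_size + body_size );
    return true;                                 // full message parsed
  }

  const std::vector< char >& body() const { return body_; }

private:
  enum { header_size = 4, max_body_size = 64 * 1024 };

  std::vector< char > buffer_;   // bytes received so far
  std::vector< char > body_;     // body of the last complete message
};

Each time async_read_some completes, feed the new bytes to parse(); on indeterminate issue another read, on true hand the body to the protocol handler, and on false close the connection, mirroring the flow in the linked connection.cpp.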

Why is this superior to most other methods? You potentially save the time between reads by parsing the request as it arrives. This is sadly not followed through in the example, but ideally you'd thread the handler so it can work on the data already available while the rest is added to the buffer. The only time it blocks is when the data is incomplete.

Hope this helps; I can't shed any light on why there is a 3 ms delay between reads, though.

answered Oct 17 '22 by Robin