I am learning to use Boost ASIO. Here is some code copied from the chat example that ships with the Boost ASIO documentation:
typedef std::deque<chat_message> chat_message_queue;

class chat_client
{
public:
  chat_client(boost::asio::io_service& io_service,
      tcp::resolver::iterator endpoint_iterator)
    : io_service_(io_service),
      socket_(io_service)
  {
    boost::asio::async_connect(socket_, endpoint_iterator,
        boost::bind(&chat_client::handle_connect, this,
          boost::asio::placeholders::error));
  }

  void write(const chat_message& msg)
  {
    io_service_.post(boost::bind(&chat_client::do_write, this, msg));
  }

  void close()
  {
    io_service_.post(boost::bind(&chat_client::do_close, this));
  }

private:
  void handle_connect(const boost::system::error_code& error)
  {
    // Implementation
  }

  void handle_read_header(const boost::system::error_code& error)
  {
    // Implementation
  }

  void handle_read_body(const boost::system::error_code& error)
  {
    // Implementation
  }

  void do_write(chat_message msg)
  {
    bool write_in_progress = !write_msgs_.empty();
    write_msgs_.push_back(msg);
    if (!write_in_progress)
    {
      boost::asio::async_write(socket_,
          boost::asio::buffer(write_msgs_.front().data(),
            write_msgs_.front().length()),
          boost::bind(&chat_client::handle_write, this,
            boost::asio::placeholders::error));
    }
  }

  void handle_write(const boost::system::error_code& error)
  {
    // Implementation
  }

  void do_close()
  {
    socket_.close();
  }

private:
  boost::asio::io_service& io_service_;
  tcp::socket socket_;
  chat_message read_msg_;
  chat_message_queue write_msgs_;
};
The writes are asynchronous, and there are no locks around the member variables write_msgs_ and read_msg_. Shouldn't there be a concurrency issue here?
Is it safe to make calls to post from the thread that runs io_service::run? What about dispatch? What about making the same calls from threads that are not running io_service::run?
In do_write(), why do they push the message into write_msgs_ instead of sending it directly? Also, in the same function, why do they check whether write_msgs_ was empty and start the async_write only if it was? Does write_msgs_.empty() == false mean a write is in progress? How?
If do_write() gets invoked in only one thread, why do I need a queue to maintain the order of sends? Wouldn't the io_service finish the tasks at hand and only then perform the asynchronous operation started by do_write? Would using dispatch instead of post make a difference in the example above?
Although the writes are asynchronous, there is no multithreading here: do_write() gets invoked in one thread. Of course, the buffer being sent must stay alive and unchanged until the completion handler is invoked.

It is safe to call post() and dispatch() from any thread; read the "thread safety" section of the io_service documentation.
If an async_write is in progress and you call async_write on the same socket again, the order in which the data is sent is undefined; in other words, the data on the wire can get interleaved and corrupted. The simplest workaround is to keep a queue of messages: every time an async_write completes, issue the next one. (By the way, the same applies to async_read.)
Why does write_msgs_.empty() == false mean a write is in progress? Because as long as write_msgs_ is non-empty, handle_write (the completion handler of the previous async_write) issues another async_write. This loop ends only when write_msgs_ becomes empty.
Please read what the documentation says about async_write:
This operation is implemented in terms of zero or more calls to the stream's async_write_some function, and is known as a composed operation. The program must ensure that the stream performs no other write operations (such as async_write, the stream's async_write_some function, or any other composed operations that perform writes) until this operation completes.
As for dispatch vs post: as far as I can see, they are interchangeable in the example above. post is essential when we do not want the posted functor to be invoked synchronously, i.e. from inside the call itself.