As far as I know, the main difference between synchronous and asynchronous operations (i.e. write() or read() vs async_write() and async_read()) is that the former do not return until the operation finishes or fails, while the latter return immediately.
Since asynchronous operations are controlled by an io_service.run() that does not return until the controlled operations have completed, it seems to me that for sequential exchanges like those of TCP/IP protocols such as POP3, where the conversation is a sequence like:
C: <connect>
S: Ok.
C: User...
S: Ok.
C: Password
S: Ok.
C: Command
S: answer
C: Command
S: answer
...
C: bye
S: <close>
the difference between synchronous and asynchronous operations does not matter much.
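For instance, with the synchronous API the whole exchange above can be written as straight-line code. A minimal sketch (placeholder host, port and credentials, no SSL, error handling by exceptions left out):

#include <boost/asio.hpp>
#include <iostream>
#include <string>

using boost::asio::ip::tcp;

int main() {
    boost::asio::io_context io;   // io_service in older Asio versions
    tcp::resolver resolver(io);
    tcp::socket socket(io);
    boost::asio::connect(socket, resolver.resolve("pop.example.com", "110"));

    boost::asio::streambuf response;
    auto read_line = [&]() {
        boost::asio::read_until(socket, response, "\r\n");  // blocks until a full line arrives
        std::istream is(&response);
        std::string line;
        std::getline(is, line);
        return line;
    };
    auto send = [&](const std::string &cmd) {
        boost::asio::write(socket, boost::asio::buffer(cmd + "\r\n"));  // blocks until all bytes are sent
    };

    std::cout << read_line() << std::endl;  // S: +OK greeting
    send("USER alice");  std::cout << read_line() << std::endl;
    send("PASS secret"); std::cout << read_line() << std::endl;
    send("QUIT");        std::cout << read_line() << std::endl;
}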
Of course, in both cases there is always the risk that the program flow stops indefinitely due to some circumstance (hence the use of timers), but I would like to hear some more authoritative opinions on the matter.
I must admit the question is rather ill-defined, but I would like some advice about when to use one or the other. I have run into problems when debugging asynchronous SSL operations with MS Visual Studio in the POP3 client I am currently working on, and I sometimes think that using asynchronous operations there may be a bad idea.
The Boost.Asio documentation really does a fantastic job explaining the two concepts. As Ralf mentioned, Chris also has a great blog describing asynchronous concepts. The parking meter example explaining how timeouts work is particularly interesting, as is the bind illustrated example.
First, consider a synchronous connect operation:
The control flow is fairly straightforward here: your program invokes some API (1) to connect a socket. The API uses an I/O service (2) to perform the operation in the operating system (3). Once the operation is complete (4 and 5), control returns to your program (6) with some indication of success or failure.
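In code, that flow looks roughly like the following sketch (host and port are placeholders); the connect call blocks through steps 2-5 and control only comes back at step 6:

#include <boost/asio.hpp>

using boost::asio::ip::tcp;

int main() {
    boost::asio::io_context io;                 // the I/O service (2)
    tcp::resolver resolver(io);
    tcp::socket socket(io);
    boost::system::error_code ec;

    // (1) the program invokes the API; it blocks while the operating
    //     system performs the connect (3), (4), (5)
    boost::asio::connect(socket, resolver.resolve("example.com", "80"), ec);

    // (6) control returns here with success or failure reported in ec
    return ec ? 1 : 0;
}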
The analogous asynchronous operation has a completely different control flow:
Here, your application initiates the operation (1) using the same I/O service (2), but the control flow is inverted: the call returns immediately, and when the operation completes the I/O service notifies your program through a completion handler. The time between step (3) and completion, which in the synchronous case was spent entirely inside the connect call, is now available to your program.
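A rough asynchronous equivalent (same placeholders): async_connect returns right away, and the result is delivered later to the completion handler from inside run():

#include <boost/asio.hpp>
#include <iostream>
#include <string>

using boost::asio::ip::tcp;

int main() {
    boost::asio::io_context io;                 // the I/O service (2)
    tcp::resolver resolver(io);
    tcp::socket socket(io);
    auto endpoints = resolver.resolve("example.com", "80");  // resolved synchronously to keep the sketch short

    // (1) initiate the operation; this call returns immediately
    boost::asio::async_connect(socket, endpoints,
        [](const boost::system::error_code &ec, const tcp::endpoint &) {
            // completion handler: invoked by the I/O service once the
            // operating system reports the result
            std::cout << (ec ? "connect failed: " + ec.message()
                             : std::string("connected")) << std::endl;
        });

    // run() drives the I/O service and dispatches completion handlers;
    // it returns once there is no outstanding work left
    io.run();
}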
You can see the synchronous case is naturally easier for most programmers to grasp because it follows the traditional control flow. The inverted control flow used by asynchronous operations is harder to understand; it often forces your program to split an operation into start and handle methods, with the logic shifted around between them. However, once you have a basic understanding of this control flow, you will realize how powerful the concept really is. Some of the advantages of asynchronous programming are:
Decouples threading from concurrency. Take a long-running operation: in the synchronous case you would often create a separate thread to handle it, to keep the application's GUI from becoming unresponsive. This works fine at a small scale, but quickly falls apart at more than a handful of threads.
Increased Performance. The thread-per-connection design simply does not scale. See the C10K problem.
Composition (or chaining). Higher-level operations can be composed of multiple completion handlers. Consider transferring a JPEG image: the protocol might dictate that the first 40 bytes contain a header describing the image size, shape, and maybe some other information. The completion handler that sends this header can initiate the second operation to send the image data. The higher-level operation sendImage() does not need to know, or care, about the method chaining used to implement the data transfer.
Timeouts and cancelability. There are platform-specific ways to time out a long-running operation (e.g. SO_RCVTIMEO and SO_SNDTIMEO). Using asynchronous operations enables the use of a deadline_timer to cancel long-running operations on all supported platforms, as the sketch below shows.
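As a rough sketch of that last point (the 30-second timeout and the line delimiter are arbitrary, and the socket, timer and buffer are assumed to outlive the operation): arm a deadline_timer next to the asynchronous read, and whichever completes first cancels the other.

#include <boost/asio.hpp>
#include <iostream>

using boost::asio::ip::tcp;

void start_read_with_timeout(tcp::socket &socket,
                             boost::asio::deadline_timer &timer,
                             boost::asio::streambuf &buffer) {
    // Arm the timer; if it fires before the read completes, cancel the
    // socket's pending operations, which then finish with operation_aborted.
    timer.expires_from_now(boost::posix_time::seconds(30));
    timer.async_wait([&socket](const boost::system::error_code &ec) {
        if (!ec)                 // not cancelled, i.e. the deadline really expired
            socket.cancel();
    });

    boost::asio::async_read_until(socket, buffer, "\r\n",
        [&timer](const boost::system::error_code &ec, std::size_t /*bytes*/) {
            timer.cancel();      // data (or an error) arrived first; disarm the timer
            if (ec == boost::asio::error::operation_aborted)
                std::cout << "read timed out" << std::endl;
        });
}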
My personal experience using Asio stems from the scalability aspect. Writing software for supercomputers requires a fair amount of care when dealing with limited resources such as memory, threads, sockets, etc. Using a thread-per-connection for ~2 million simultaneous operations is a design that is dead on arrival.
I suppose the choice of synchronous/asynchronous is very application specific. I agree that the asynchronous paradigm can make the code as well as the debugging a lot more complex, but it does have its benefits.
To illustrate: the main reason we switched from synchronous IO to asynchronous IO with Boost.Asio is that in our application blocking IO was simply not an option. We have a multimedia streaming server in which I was streaming encoded media packets to multiple clients, and network issues (e.g. the connection to a single client failing) would effectively stall the whole capture-encode-deliver pipeline.
To summarize, in my (limited) experience with asynchronous IO, it is useful in situations where you have other work to do while waiting for the IO to complete (such as serving other clients). In systems or scenarios where you have to wait for the result of the IO before you can continue, it is much simpler to just use synchronous IO.
Asynchronous IO also makes sense in duplex communication systems (e.g. more complex protocols such as SIP and RTSP, where both client and server can send requests). It's been a while since I've dealt with POP, but for the simple exchange in your example, async IO could be considered overkill. I would switch to async IO only once I was sure that sync IO isn't sufficient to meet my requirements.
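For a rough illustration of the duplex case: keep one read permanently pending so the peer can send a request at any moment, while writes are issued independently (the line-based framing is just an assumption for the sketch):

#include <boost/asio.hpp>
#include <iostream>
#include <memory>
#include <string>

using boost::asio::ip::tcp;

// Keep one read outstanding at all times so unsolicited peer requests
// (as in SIP or RTSP) are picked up while we remain free to send our own.
void start_read_loop(tcp::socket &socket, boost::asio::streambuf &buffer) {
    boost::asio::async_read_until(socket, buffer, "\r\n",
        [&socket, &buffer](const boost::system::error_code &ec, std::size_t /*bytes*/) {
            if (ec) return;                      // connection closed or failed
            std::istream is(&buffer);
            std::string line;
            std::getline(is, line);
            std::cout << "peer says: " << line << std::endl;
            start_read_loop(socket, buffer);     // re-arm the read
        });
}

void send_request(tcp::socket &socket, const std::string &text) {
    auto msg = std::make_shared<std::string>(text + "\r\n");
    boost::asio::async_write(socket, boost::asio::buffer(*msg),
        [msg](const boost::system::error_code &, std::size_t) {
            // msg is kept alive by the shared_ptr until the write completes
        });
}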
Regarding the Boost.Asio documentation, I found that the best way to get the hang of it was to work through the examples. A link you might also want to check out is http://en.highscore.de/cpp/boost/index.html, which has a really nice chapter on Boost.Asio. Chris Kohlhoff's (the author of Asio) blog also has some really excellent articles worth checking out.
Synchronous code makes the program flow easy to follow.
Asynchronous code can perform better, since it does not need to save and restore registers for fiber task switches.
Asynchronous code relies on callbacks and is harder to program. You can try promise-cpp to make the asynchronous flow read like synchronous code.
Example of an HTTP client:
// <1> Resolve the host
async_resolve(session->resolver_, host, port)
.then([=](tcp::resolver::results_type &results) {
    // <2> Connect to the host
    return async_connect(session->socket_, results);
}).then([=]() {
    // <3> Write the request
    return async_write(session->socket_, session->req_);
}).then([=](std::size_t bytes_transferred) {
    boost::ignore_unused(bytes_transferred);
    // <4> Read the response
    return async_read(session->socket_, session->buffer_, session->res_);
}).then([=](std::size_t bytes_transferred) {
    boost::ignore_unused(bytes_transferred);
    // <5> Write the message to standard out
    std::cout << session->res_ << std::endl;
}).then([]() {
    // <6> On success, return a default-constructed error_code
    return boost::system::error_code();
}, [](const boost::system::error_code err) {
    // <6> On failure, return the error_code
    return err;
}).then([=](boost::system::error_code &err) {
    // <7> Gracefully close the socket
    std::cout << "shutdown..." << std::endl;
    session->socket_.shutdown(tcp::socket::shutdown_both, err);
});
Full code here