In the following code, I expect the output to always be 1, because only one handler should run when poll_one() is called. However, about once in 300 runs, the output is actually 3. Based on my understanding of Boost.Asio, this seems incorrect. Is the non-deterministic behavior a bug or expected?
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <iostream>

int main() {
  boost::asio::io_service io;
  boost::asio::io_service::work io_work(io);
  boost::asio::io_service::strand strand1(io);
  boost::asio::io_service::strand strand2(io);

  int val = 0;
  strand1.post([&val, &strand2]() {
    val = 1;
    strand2.post([&val]() {
      val = 2;
    });
    boost::asio::spawn(strand2, [&val](boost::asio::yield_context yield) {
      val = 3;
    });
  });
  io.poll_one();
  std::cout << "Last executed: " << val << std::endl;
  return 0;
}
Using boost-asio 1.60.0.6
The observed behavior is well defined and expected to occur, but one should not expect it to occur often.
Asio has a limited pool of strand implementations, and the default allocation strategy for strands is hashing. If a hash collision occurs, two strands share the same implementation, and the example simplifies to the following demo:
#include <cassert>
#include <boost/asio.hpp>

int main()
{
  boost::asio::io_service io_service;
  boost::asio::io_service::strand strand1(io_service);
  // Have strand2 use the same implementation as strand1.
  boost::asio::io_service::strand strand2(strand1);

  int value = 0;
  auto handler1 = [&value, &strand1, &strand2]() {
    assert(strand1.running_in_this_thread());
    assert(strand2.running_in_this_thread());
    value = 1;

    // handler2 is queued into the strand and never invoked.
    auto handler2 = []() { assert(false); };
    strand2.post(handler2);

    // handler3 is executed immediately.
    auto handler3 = [&value]() { value = 3; };
    strand2.dispatch(handler3);
    assert(value == 3);
  };

  // Enqueue handler1.
  strand1.post(handler1);

  // Run the event processing loop, executing handler1.
  assert(io_service.poll_one() == 1);
}
In the above example:

- io_service.poll_one() executes a single ready handler (handler1)
- handler2 is never invoked
- handler3 is invoked immediately within strand2.dispatch(), as strand2.dispatch() is invoked from within a handler where strand2.running_in_this_thread() returns true
There are various details contributing to the observed behavior:

io_service::poll_one() runs the io_service's event loop without blocking, and executes at most one ready-to-run handler. Handlers executed immediately within the context of a dispatch() are never enqueued into the io_service, and are not subject to poll_one()'s limit of invoking a single handler.
The boost::asio::spawn(strand, function) overload starts a stackful coroutine as-if by strand.dispatch():

- if strand.running_in_this_thread() returns false for the caller, then the coroutine will be posted into the strand for deferred invocation
- if strand.running_in_this_thread() returns true for the caller, then the coroutine will be executed immediately

Discrete strand objects that use the same implementation still maintain the guarantees of a strand. Namely, concurrent execution will not occur and the order of handler invocation is well defined. When discrete strand objects use discrete implementations, and multiple threads are running the io_service, then one may observe the discrete strands executing concurrently. However, when discrete strand objects use the same implementation, one will not observe concurrency even if multiple threads are running the io_service. This behavior is documented:
The implementation makes no guarantee that handlers posted or dispatched through different strand objects will be invoked concurrently.
Asio has a limited pool of strand implementations. The current default is 193, and it can be controlled by defining BOOST_ASIO_STRAND_IMPLEMENTATIONS to the desired number. This feature is noted in the Boost.Asio 1.48 release notes:

Made the number of strand implementations configurable by defining BOOST_ASIO_STRAND_IMPLEMENTATIONS to the desired number.
By decreasing the pool size, one increases the chance that two discrete strands will use the same implementation. With the original code, if one were to set the pool size to 1, then strand1 and strand2 would always use the same implementation, resulting in val always being 3 (demo).
The default strategy for allocating strand implementations is to use a golden-ratio hash. As a hashing algorithm is used, there is a potential for collisions, resulting in the same implementation being used for multiple discrete strand objects. By defining BOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION, one can change the allocation strategy to round-robin, preventing a collision from occurring until BOOST_ASIO_STRAND_IMPLEMENTATIONS + 1 strand allocations have occurred. This feature is noted in the Boost.Asio 1.48 release notes:

Added support for a new BOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION flag which switches the allocation of strand implementations to use a round-robin approach rather than hashing.
Given the above details, the following occurs when 1 is observed in the original code:

- strand1 and strand2 have discrete implementations
- io_service::poll_one() executes the single handler that was posted directly into strand1
- the handler posted into strand1 sets val to 1
- the handler posted into strand2 is enqueued and never invoked
- the coroutine creation is deferred, as the strand's order-of-invocation guarantee prevents the coroutine from being created until after the previous handler posted into strand2 has executed:

given a strand object s, if s.post(a) happens-before s.dispatch(b), where the latter is performed outside the strand, then asio_handler_invoke(a1, &a1) happens-before asio_handler_invoke(b1, &b1).
On the other hand, when 3 is observed:

- a hash collision occurs for strand1 and strand2, resulting in them using the same underlying strand implementation
- io_service::poll_one() executes the single handler that was posted directly into strand1
- the handler posted into strand1 sets val to 1
- the handler posted into strand2 is enqueued and never invoked
- the coroutine is created and executed immediately within boost::asio::spawn(), setting val to 3, as strand2 can safely execute the coroutine while maintaining the guarantee of non-concurrent execution and order of handler invocation