I'm trying to write my own custom async function for boost::asio, as described here. However, I'm getting a boost::coroutines::detail::forced_unwind exception on the line with result.get():
#include <boost/chrono.hpp>
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/steady_timer.hpp>
#include <iostream>
namespace asio = ::boost::asio;
template <typename Timer, typename Token>
auto my_timer (Timer& timer, Token&& token)
{
    typename asio::handler_type<Token,
        void (::boost::system::error_code const)>::type
            handler (std::forward<Token> (token));

    asio::async_result<decltype (handler)> result (handler);

    timer.async_wait (handler);

    return result.get (); // Got forced_unwind exception here.
}
int main ()
{
    asio::io_service io;
    asio::steady_timer timer (io, ::boost::chrono::seconds (1));

    asio::spawn (io, [&] (asio::yield_context yield)
        {
            try {
                std::cout << "my_timer enter\n";
                my_timer (timer, yield);
                std::cout << "my_timer returns\n";
            }
            catch (const boost::coroutines::detail::forced_unwind& e)
            {
                std::cout << "boost::coroutines::detail::forced_unwind\n";
            }
        }
    );

    io.run ();
}
Same code on Coliru
UPDATE:
The behavior exists on:
Darwin 14.0.0 (MacOS 10.10)
clang version 3.6.0 (trunk 216817) and gcc version 4.9.1 (MacPorts gcc49 4.9.1_1)
boost 1.57
and
Red Hat 6.5
gcc version 4.7.2 20121015 (Red Hat 4.7.2-5) (GCC)
boost 1.57 and 1.56
(the example code was trivially modified because gcc 4.7 does not support C++14 mode)
In short, you need to create a copy of the handler, such as by posting it into the io_service, before attempting to get the async_result, in order to keep the coroutine alive.
Boost.Asio prevents a non-resumable coroutine from suspending indefinitely by destroying the coroutine, causing the coroutine's stack to unwind: during its destruction, the coroutine object throws boost::coroutines::detail::forced_unwind into the suspended stack. Asio accomplishes this as follows:

- The yield_context CompletionToken maintains a weak_ptr to the coroutine.
- When the handler_type::type handler is constructed, it obtains a shared_ptr to the coroutine via the CompletionToken's weak_ptr. When the handler is passed as the completion handler to asynchronous operations, the handler and its shared_ptr are copied. When the handler is invoked, it resumes the coroutine.
- In async_result::get(), the specialization resets the coroutine shared_ptr owned by the handler that was passed to async_result during construction, and then yields the coroutine.

Here is an attempt to illustrate the execution of the code. Paths in | indicate the active stack, : indicates the suspended stack, and arrows indicate transfer of control:
boost::asio::io_service io_service;
boost::asio::spawn(io_service, &my_timer);
`-- dispatch a coroutine creator
    into the io_service.
io_service.run();
|-- invoke the coroutine entry
|   handler.
|   |-- create coroutine
|   |   (count: 1)
|   |-- start coroutine ----> my_timer()
:   :                         |-- create handler1 (count: 2)
:   :                         |-- create async_result1(handler1)
:   :                         |-- timer.async_wait(handler)
:   :                         |   |-- create handler2 (count: 3)
:   :                         |   |-- create async_result2(handler2)
:   :                         |   |-- create operation and copy
:   :                         |   |   handler3 (count: 4)
:   :                         |   `-- async_result2.get()
:   :                         |       |-- handler2.reset() (count: 3)
|   `-- return     <----      |       `-- yield
|   `-- ~entry handler        :
|       (count: 2)            :
|-- io_service has work (the  :
|   async_wait operation)     :
|   ...async wait completes...:
|-- invoke handler3           :
|   |-- resume     ---->      |-- async_result1.get()
:   :                         |   |-- handler1.reset() (count: 1)
|   `-- return     <----      |   `-- yield
|   `-- ~handler3             :   :
|   |   (count: 0)            :   :
|   `-- ~coroutine()  ---->   |   `-- throw forced_unwind
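The reference counts annotated in the diagram can be modeled in isolation with plain shared_ptr/weak_ptr. The following toy sketch is not Asio's actual code, just an illustration of the keep-alive mechanism; Coroutine here is a stand-in for the real coroutine object:

#include <iostream>
#include <memory>

struct Coroutine {
    ~Coroutine() { std::cout << "coroutine destroyed -> stack unwinds\n"; }
};

int main()
{
    auto coro = std::make_shared<Coroutine>();  // entry handler's reference (count: 1)
    std::weak_ptr<Coroutine> token = coro;      // yield_context keeps only a weak_ptr

    auto handler = token.lock();                // handler grabs a shared_ptr (count: 2)

    coro.reset();                               // entry handler returns (count: 1)
    handler.reset();                            // async_result::get() resets the handler's
                                                // copy (count: 0) -> the coroutine is
                                                // destroyed while still suspended
}

In the real implementation, it is this drop to zero while the coroutine is still suspended that triggers the forced_unwind.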
To fix this problem, handler needs to be copied and invoked through asio_handler_invoke() when it is time to resume the coroutine. For example, the following posts a completion handler¹ into the io_service that invokes a copy of handler:
timer.async_wait (handler);

timer.get_io_service().post(
    std::bind([](decltype(handler) handler)
    {
        boost::system::error_code error;
        // Handler must be invoked through asio_handler_invoke hooks
        // to properly synchronize with the coroutine's execution
        // context.
        using boost::asio::asio_handler_invoke;
        asio_handler_invoke(std::bind(handler, error), &handler);
    }, handler)
);

return result.get ();
As demonstrated here, with this additional code, the output becomes:
my_timer enter
my_timer returns
¹ The completion handler code can likely be cleaned up a bit, but as I was answering how to resume a Boost.Asio stackful coroutine from a different thread, I observed some compilers selecting the wrong asio_handler_invoke hook.
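For reference, here is my_timer with the fix applied in one piece. This is simply the original function combined with the snippet above (plus <functional> for std::bind), not a separately verified variant:

#include <functional> // std::bind

template <typename Timer, typename Token>
auto my_timer (Timer& timer, Token&& token)
{
    typename asio::handler_type<Token,
        void (::boost::system::error_code const)>::type
            handler (std::forward<Token> (token));

    asio::async_result<decltype (handler)> result (handler);

    timer.async_wait (handler);

    // Post a handler that invokes a copy of `handler`, keeping the
    // coroutine's shared_ptr count non-zero until it can be resumed.
    timer.get_io_service().post(
        std::bind([](decltype(handler) handler)
        {
            boost::system::error_code error;
            using boost::asio::asio_handler_invoke;
            asio_handler_invoke(std::bind(handler, error), &handler);
        }, handler)
    );

    return result.get (); // No more forced_unwind.
}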
This is a Boost Coroutine implementation detail.
As documented here under exceptions:

⚠ Important: Code executed by coroutine-function must not prevent the propagation of the detail::forced_unwind exception. Absorbing that exception will cause stack unwinding to fail. Thus, any code that catches all exceptions must re-throw any pending detail::forced_unwind exception.
So you are explicitly required to pass this exception through. Code the handler like this:
Live On Coliru
try {
    std::cout << "my_timer enter\n";
    my_timer(timer, yield);
    std::cout << "my_timer returns\n";
}
catch (boost::coroutines::detail::forced_unwind const& e)
{
    throw; // required for Boost Coroutine!
}
catch (std::exception const& e)
{
    std::cout << "exception '" << e.what() << "'\n";
}
This particular exception is an implementation detail and must be allowed to propagate. To be fair, this makes it unsafe to "naively" use existing (legacy) code that might not afford this guarantee. I think this is a very strong reason for centralized exception strategies (like using a Lippincott function for exception handlers).
Beware that this last idea might be expressly prohibited in Coroutines too:

⚠ Important: Do not jump from inside a catch block and then re-throw the exception in another execution context.
Update: As @DeadMG just commented on that article, we can trivially transform the Lippincott function into a wrapping function, which could satisfy the requirements for Coroutine while centralizing exception handling. A sketch of that transformation follows.
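As a rough illustration, such a wrapping function might look like the sketch below. This is a minimal sketch of my own, not code from the article or from any library; the name with_forced_unwind_safety is made up:

#include <boost/coroutine/exceptions.hpp> // boost::coroutines::detail::forced_unwind
#include <exception>
#include <iostream>

// Hypothetical wrapper: centralizes exception handling for code running
// inside a Boost.Coroutine while still letting forced_unwind propagate.
template <typename F>
void with_forced_unwind_safety(F f)
{
    try {
        f();
    }
    catch (boost::coroutines::detail::forced_unwind const&) {
        throw; // required for Boost Coroutine!
    }
    catch (std::exception const& e) {
        std::cout << "exception '" << e.what() << "'\n";
    }
}

// Usage inside the spawned coroutine would then be, e.g.:
//
//   asio::spawn(io, [&](asio::yield_context yield) {
//       with_forced_unwind_safety([&] {
//           my_timer(timer, yield);
//       });
//   });

Unlike a classic Lippincott function, the wrapper never leaves a catch block before re-throwing, so it stays within the restriction quoted above while still keeping the exception policy in one place.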