
concurrency::task destructor causes call to abort in valid use-case

Could you please tell me if the approach I use to handle this use-case is invalid, and if so, what is the right way to handle it:

task<int> do_work(int param)
{
    // runs some work on a separate thread, returns task with result or throws exception on failure
}

void foo()
{
    try
    {
        auto result_1 = do_work(10000);
        auto result_2 = do_work(20000);

        // do some extra work

        process(result_1.get(), result_2.get());
    }
    catch (...)
    {
        // logs the failure details
    }
}

So the code tries to execute two jobs in parallel and then process the results. If one of the jobs throws an exception, then the call to task::get will re-throw the exception. The issue happens if both tasks throw an exception. In this case the first call to task::get will cause stack unwind, so the destructor of the second task will be called and will in turn cause one more exception to be re-thrown during stack unwind which causes 'abort' to be called.

This approach seemed completely valid to me until I ran into this issue.

asked Oct 20 '22 by topoden

1 Answer

In simple words, you have an unhandled (unobserved) exception: the exception thrown in one of your tasks does not get caught by the task, one of its continuations, or the main app, because the exception re-thrown from task::get for the first task unwinds the stack before task::get is ever called for the second task.

A simplified example shows that std::terminate is called because the exception thrown in the task never gets handled. Uncommenting the result.get() line prevents the call to std::terminate, because task::get re-throws the exception and the catch block then observes it.

#include <pplx/pplx.h>
#include <pplx/pplxtasks.h>
#include <iostream>

int main(int argc, char* argv[])
{
    try
    {
        auto result = pplx::create_task([]() -> int
        {
            throw std::runtime_error("task failed");
        });

        // a real program would need to wait here until the exception has been
        // thrown, e.g. result.wait(), but wait() would also re-throw the
        // exception and turn this into a valid use-case

        std::cout << &result << std::endl; // use the task without observing it
        //std::cout << result.get() << std::endl;
    }
    catch (std::exception const& ex)
    {
        std::cout << ex.what() << std::endl;
    }

    return 0;
}

Have a look at the suggestion in pplx::details::_ExceptionHolder::~_ExceptionHolder():

//pplxwin.h
#define _REPORT_PPLTASK_UNOBSERVED_EXCEPTION() do { \
    __debugbreak(); \
    std::terminate(); \
} while(false)


//pplxtasks.h
pplx::details::_ExceptionHolder::~_ExceptionHolder()
{
    if (_M_exceptionObserved == 0)
    {
        // If you are trapped here, it means an exception thrown in task chain didn't get handled.
        // Please add task-based continuation to handle all exceptions coming from tasks.
        // this->_M_stackTrace keeps the creation callstack of the task generates this exception.
        _REPORT_PPLTASK_UNOBSERVED_EXCEPTION();
    }
}
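
That comment points at task-based continuations. Below is a minimal sketch of that suggestion, building on the simplified example above and assuming the same cpprestsdk pplx headers: a task-based continuation receives the whole task, so calling get() inside it observes the exception and the destructor above no longer calls std::terminate.

#include <pplx/pplxtasks.h>
#include <iostream>

int main()
{
    auto result = pplx::create_task([]() -> int
    {
        throw std::runtime_error("task failed");
    });

    // Task-based continuation: it runs whether the antecedent task succeeded
    // or failed, and calling get() on the received task observes the exception.
    result.then([](pplx::task<int> t)
    {
        try
        {
            std::cout << t.get() << std::endl;
        }
        catch (std::exception const& ex)
        {
            std::cout << "handled: " << ex.what() << std::endl;
        }
    }).wait();

    return 0;
}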

In the original code the first call to task::get re-throws the exception thrown in that task, which obviously prevents the second call to task::get, so the exception of the second task does not get handled (it remains "unobserved").

"the destructor of the second task will be called and will in turn cause one more exception to be re-thrown during stack unwind which causes 'abort' to be called."

The destructor of the second task does not re-throw the exception; it just calls std::terminate() (which in turn calls std::abort()).

See Exception Handling in the Concurrency Runtime for more details.
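
Applied to the original foo(), one possible fix (just a sketch, assuming do_work and process from the question) is to observe each task separately, so that a failure of the first task can never leave the second task's exception unobserved:

void foo()
{
    auto result_1 = do_work(10000);
    auto result_2 = do_work(20000);

    // do some extra work

    bool ok = true;
    int value_1 = 0;
    int value_2 = 0;

    // Observe each task in its own try/catch so that both exceptions get
    // handled even when both tasks fail.
    try { value_1 = result_1.get(); }
    catch (...) { ok = false; /* log the failure details of the first task */ }

    try { value_2 = result_2.get(); }
    catch (...) { ok = false; /* log the failure details of the second task */ }

    if (ok)
    {
        process(value_1, value_2);
    }
}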

answered Oct 24 '22 by null