
Exceptions in C++ destructors if there is no uncaught_exception

People have argued pretty strongly against throwing exceptions from destructors. Take this answer as an example. I wonder whether std::uncaught_exception() can be used to portably detect whether we are in the process of unwinding the stack due to some other exception.

I find myself deliberately throwing exceptions in destructors. To mention two possible use cases:

  • Some resource cleanup which involves flushing buffers, so that failure likely signifies truncated output.
  • Destruction of an object holding a std::exception_ptr which might contain an exception encountered in a different thread.
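
For instance, the second case might look roughly like this (a simplified sketch; the class name is made up for illustration, and noexcept(false) is needed because destructors are implicitly noexcept since C++11):

#include <exception>

// Holds an error captured (e.g. via std::current_exception) on another thread;
// destroying the holder rethrows it so the failure isn't silently dropped.
class AsyncResult {
public:
  explicit AsyncResult(std::exception_ptr error) : error_(error) {}

  ~AsyncResult() noexcept(false) {
    if (error_)
      std::rethrow_exception(error_);  // throwing from a destructor
  }

private:
  std::exception_ptr error_;
};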

Simply ignoring these exceptional situations feels plain wrong. And chances are that by throwing an exception some exception handler might be able to provide more useful context information than if the destructor itself were writing to std::cerr. Furthermore, throwing exceptions for all failed assertions is an important part of my unit testing approach. An error message followed by an ignored error condition wouldn't work in that case.

So my question is: is it OK to throw exceptions from a destructor except when another exception is being processed, or is there a reason not to do that?

To put this in code:

Foo::~Foo() {
  bool success = trySomeCleanupOperation();
  if (!success) {
    if (std::uncaught_exception())
      // stack unwinding already in progress: don't throw, just report
      std::cerr << "Error in destructor: " << errorCode << std::endl;
    else
      // no exception in flight, so throwing should be safe
      throw FooOperationFailed("Error in destructor", errorCode);
  }
}

As far as I can tell, this should be safe and in many cases better than not throwing an exception at all. But I'd like to verify that.

asked Mar 05 '13 at 11:03 by MvG

1 Answer

Herb Sutter has written on the subject: http://www.gotw.ca/gotw/047.htm

His conclusion is to never throw from a destructor; instead, always report the error using the mechanism that you would use in the case where you can't throw.

The two reasons are:

  • it doesn't always work. Sometimes uncaught_exception returns true and yet it is safe to throw.
  • it's bad design to have the same error reported in two different ways, both of which the user will have to account for if they want to know about the error.

Note that for any given reusable piece of code, there is no way to know for sure that it will never be called during stack unwinding. Whatever your code does, you can't be certain that some user of it won't want to call it from a destructor, with a try/catch in place to handle its exceptions. In that situation uncaught_exception returns true even though throwing is perfectly safe, so you can't rely on it to tell you when throwing is safe, except maybe by documenting the function "must not be called from destructors". If you resorted to that, then all callers would also have to document their functions "must not be called from destructors", and you would have an even more annoying restriction.
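To illustrate the first point: a destructor can run as part of stack unwinding and still throw safely, as long as the exception is caught before it leaves that destructor, yet uncaught_exception reports true the whole time, so the check in the question would suppress the throw unnecessarily. A minimal sketch (the class names are invented; uncaught_exception is deprecated from C++17):

#include <exception>
#include <iostream>
#include <stdexcept>

struct Inner {
  ~Inner() {
    // Runs while main()'s exception is still unwinding, so
    // uncaught_exception() reports true - yet throwing from here would be
    // safe, because the catch block in ~Outer would catch it.
    std::cout << "in ~Inner, uncaught_exception() = "
              << std::boolalpha << std::uncaught_exception() << std::endl;
  }
};

struct Outer {
  ~Outer() {
    try {
      Inner inner;  // destroyed when this block ends
    } catch (...) {
      // an exception thrown by ~Inner would be caught here rather than terminating
    }
  }
};

int main() {
  try {
    Outer outer;
    throw std::runtime_error("original error");  // unwinds, running ~Outer
  } catch (const std::exception& e) {
    std::cout << "caught: " << e.what() << std::endl;
  }
}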

Aside from anything else, the nothrow guarantee is valuable to users - it helps them write exception-safe code if they know that a particular thing that they do won't throw.

One way out is to give your class a member function close that calls trySomeCleanupOperation and throws if it fails. Then the destructor calls trySomeCleanupOperation and logs or suppresses the error, but doesn't throw. Users can call close if they want to know whether their operation succeeded, and just let the destructor handle it if they don't care (including the case where the destructor is called as part of stack unwinding, because an exception was thrown before getting to the user's call to close).

"Aha!", you say, "but that defeats the purpose of RAII because the user has to remember to call close!". Yes, a bit, but what's in question isn't whether RAII can do everything you want. It can't. What's in question is whether it consistently does less than you'd like (you'd like it to throw an exception if trySomeCleanupOperation fails), or does less only in the surprising case where it runs during stack unwinding.
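In code, the idea might look roughly like this (a sketch only; trySomeCleanupOperation is stubbed out and the exception type is a placeholder):

#include <iostream>
#include <stdexcept>

class Foo {
public:
  // Explicit cleanup: callers who care about failure call this and handle
  // the exception themselves.
  void close() {
    if (closed_) return;
    closed_ = true;
    if (!trySomeCleanupOperation())
      throw std::runtime_error("Foo: cleanup failed");
  }

  // The destructor never throws: if close() wasn't called, it attempts the
  // cleanup anyway and merely logs a failure.
  ~Foo() {
    if (closed_) return;
    if (!trySomeCleanupOperation())
      std::cerr << "Foo: cleanup failed in destructor" << std::endl;
  }

private:
  bool trySomeCleanupOperation() { return true; }  // stub for illustration
  bool closed_ = false;
};

A caller who wants to know about failures calls close() inside a try/catch; a caller who doesn't care simply lets the destructor run, including during stack unwinding.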

Furthermore, throwing exceptions for all failed assertions is an important part of my unit testing approach

That's probably a mistake - your unit testing framework should be capable of treating a terminate() as a test failure. Suppose that an assert fails during stack unwinding - surely you want to record that, but you can't do so by throwing an exception, so you've painted yourself into a corner. If your assertions terminate then you can detect them as terminations.

Unfortunately, if you terminate then you can't run the rest of the tests (at least, not in that process). But if an assertion fails then generally speaking your program is in an unknown and potentially unsafe state. So once you've failed an assertion you can't rely on doing anything else in that process anyway. You could consider designing your test framework to use more than one process, or just accept that a sufficiently severe test failure will prevent the rest of the tests from running. Externally to the test framework, you can consider that your test run has three possible outcomes: "all passed", "something failed", "tests crashed". If the test run fails to complete, you don't treat it as a pass.
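A rough sketch of the multi-process idea (POSIX-specific and purely illustrative; the names are invented): run each test in a forked child, and classify it as passed, failed, or crashed from the child's exit status.

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdlib>

enum class TestResult { Passed, Failed, Crashed };

// Run a single test in its own process, so that terminate()/abort() in the
// test only kills the child and the parent can carry on with other tests.
TestResult runIsolated(void (*test)()) {
  pid_t pid = fork();
  if (pid < 0)
    return TestResult::Crashed;  // fork failed; at least don't report a pass
  if (pid == 0) {                // child: run the test
    test();
    std::_Exit(0);               // reached only if the test completed
  }
  int status = 0;
  waitpid(pid, &status, 0);
  if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
    return TestResult::Passed;
  if (WIFSIGNALED(status))
    return TestResult::Crashed;  // e.g. SIGABRT from a failed assertion
  return TestResult::Failed;     // exited with a nonzero status
}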

answered Sep 17 '22 at 04:09 by Steve Jessop