A finally block always executes when the try block exits, regardless of whether an exception is thrown. This is useful when you need to run cleanup code no matter what happens: for example, a file should always be closed, even if an error occurs while working with it. Note also that because the finally block always runs, a return statement inside it determines what the function ultimately returns.
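As a minimal sketch of that file-closing pattern (assuming a file named data.txt exists; in modern code a with statement does the same job automatically):

f = open('data.txt')  # hypothetical file name
try:
    print(f.read())
finally:
    # Runs whether the read succeeded or raised, so the file is always closed.
    f.close()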
"Guaranteed" is a much stronger word than any implementation of finally
deserves. What is guaranteed is that if execution flows out of the whole try
-finally
construct, it will pass through the finally
to do so. What is not guaranteed is that execution will flow out of the try
-finally
.
A finally
in a generator or async coroutine might never run, if the object never executes to conclusion. There are a lot of ways that could happen; here's one:
def gen(text):
    try:
        for line in text:
            try:
                yield int(line)
            except:
                # Ignore blank lines - but catch too much!
                pass
    finally:
        print('Doing important cleanup')

text = ['1', '', '2', '', '3']
if any(n > 1 for n in gen(text)):
    print('Found a number')

print('Oops, no cleanup.')
Note that this example is a bit tricky: when the generator is garbage collected, Python attempts to run the finally block by throwing in a GeneratorExit exception, but here we catch that exception and then yield again, at which point Python prints a warning ("generator ignored GeneratorExit") and gives up. See PEP 342 (Coroutines via Enhanced Generators) for details.

Other ways a generator or coroutine might not execute to conclusion include if the object is just never GC'ed (yes, that's possible, even in CPython), or if an async with awaits in __aexit__, or if the object awaits or yields in a finally block. This list is not intended to be exhaustive.
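One way to make the cleanup deterministic (not part of the answer above, just a common remedy) is to close the generator explicitly, for example with contextlib.closing, which calls close() and forces the finally block to run at a known point:

from contextlib import closing

def gen(text):
    try:
        for line in text:
            yield int(line)
    finally:
        print('Doing important cleanup')

text = ['1', '2', '3']
with closing(gen(text)) as g:
    if any(n > 1 for n in g):
        print('Found a number')
# close() has already run by this point, so the cleanup message has printed.
print('Cleanup is done.')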
A finally in a daemon thread might never execute if all non-daemon threads exit first.
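A minimal sketch of that case (the sleep duration is arbitrary): the main thread finishes while the daemon is still sleeping, and the process exits without running the daemon's finally block:

import threading
import time

def worker():
    try:
        time.sleep(10)
    finally:
        # Never reached: the process exits while this daemon thread sleeps.
        print('daemon cleanup')

threading.Thread(target=worker, daemon=True).start()
print('main thread exiting')
# The interpreter shuts down here; 'daemon cleanup' is never printed.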
os._exit will halt the process immediately without executing finally blocks.
os.fork may cause finally blocks to execute twice. As well as just the normal problems you'd expect from things happening twice, this could cause concurrent access conflicts (crashes, stalls, ...) if access to shared resources is not correctly synchronized.
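For instance, in this sketch (Unix-only, since os.fork does not exist on Windows), both the parent and the child flow out of the same try, so the finally body runs once in each process:

import os

try:
    pid = os.fork()  # after this call, parent and child both continue here
finally:
    # Executes twice: once in the parent, once in the child.
    print('cleanup in pid', os.getpid())

if pid == 0:
    os._exit(0)  # let the child exit quietly
else:
    os.waitpid(pid, 0)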
Since multiprocessing uses fork-without-exec to create worker processes when using the fork start method (the default on Unix), and then calls os._exit in the worker once the worker's job is done, finally and multiprocessing interaction can be problematic (example).
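A sketch of why that combination is troublesome, hand-rolled with os.fork rather than multiprocessing itself: the child exits via os._exit, just as a pool worker does, so any finally block it is inside at that moment is skipped:

import os

try:
    pid = os.fork()
    if pid == 0:
        # Child: exit the way multiprocessing workers do.
        os._exit(0)  # the finally below never runs in the child
    os.waitpid(pid, 0)
finally:
    # Only the parent prints this.
    print('cleanup in pid', os.getpid())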
C-level segfaults will prevent finally blocks from running.

kill -SIGKILL will prevent finally blocks from running. SIGTERM and SIGHUP will also prevent finally blocks from running unless you install a handler to control the shutdown yourself; by default, Python does not handle SIGTERM or SIGHUP.
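A minimal sketch of installing such a handler (signal.raise_signal merely simulates an external kill -TERM here): converting SIGTERM into a normal SystemExit lets finally blocks unwind during shutdown:

import signal
import sys

# Turn SIGTERM into a normal interpreter shutdown (SystemExit),
# so try/finally blocks unwind instead of the process dying abruptly.
signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(0))

try:
    signal.raise_signal(signal.SIGTERM)  # stand-in for an external kill -TERM
finally:
    print('cleanup runs because SIGTERM now raises SystemExit')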
Exceptions raised inside a finally block can prevent cleanup from completing. One particularly noteworthy case is if the user hits control-C just as we're starting to execute the finally block. Python will raise a KeyboardInterrupt and skip every line of the finally block's contents. (KeyboardInterrupt-safe code is very hard to write.)

If the computer loses power, or if it hibernates and doesn't wake up, finally blocks won't run.

The finally block is not a transaction system; it doesn't provide atomicity guarantees or anything of the sort. Some of these examples might seem obvious, but it's easy to forget such things can happen and rely on finally for too much.
Yes. Finally always wins.
The only way to defeat it is to halt execution before finally: gets a chance to execute (e.g. crash the interpreter, turn off your computer, suspend a generator forever).
I imagine there are other scenarios I haven't thought of.
Here are a couple more you may not have thought about:
def foo():
    # finally always wins
    try:
        return 1
    finally:
        return 2

def bar():
    # even if it has to eat an unhandled exception, finally wins
    try:
        raise Exception('boom')
    finally:
        return 'no boom'
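Calling them confirms it (a quick interactive check of the definitions above):

>>> foo()
2
>>> bar()
'no boom'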
Depending on how you quit the interpreter, sometimes you can "cancel" finally, but not like this:
>>> import sys
>>> try:
... sys.exit()
... finally:
... print('finally wins!')
...
finally wins!
$
Using the precarious os._exit (this falls under "crash the interpreter" in my opinion):
>>> import os
>>> try:
... os._exit(1)
... finally:
... print('finally!')
...
$
I'm currently running this code to test whether finally will still execute after the heat death of the universe:
from time import sleep

try:
    while True:
        sleep(1)
finally:
    print('done')
However, I'm still waiting on the result, so check back here later.
According to the Python documentation:
No matter what happened previously, the final-block is executed once the code block is complete and any raised exceptions handled. Even if there's an error in an exception handler or the else-block and a new exception is raised, the code in the final-block is still run.
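A quick sketch of that last point (a hypothetical function, not from the documentation): even when the except handler itself raises a new exception, the finally block still runs before that new exception propagates:

def handler_raises():
    try:
        raise ValueError('original problem')
    except ValueError:
        raise RuntimeError('error inside the handler')
    finally:
        # Still executes before the RuntimeError propagates.
        print('final-block still runs')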
It should also be noted that if there are multiple return statements, including one in the finally block, the one in the finally block is the only one that will execute.
Well, yes and no.
What is guaranteed is that Python will always try to execute the finally block. In the case where you return from the block or raise an uncaught exception, the finally block is executed just before actually returning or raising the exception.
(You could have verified this yourself simply by running the code in your question.)

The only case I can imagine where the finally block will not be executed is when the Python interpreter itself crashes, for example inside C code or because of a power outage.
I found this one without using a generator function:
import multiprocessing
import time

def fun(arg):
    try:
        print("tried " + str(arg))
        time.sleep(arg)
    finally:
        print("finally cleaned up " + str(arg))
    return foo  # deliberately undefined: raises NameError after the finally

if __name__ == '__main__':
    multiprocessing.Pool().map(fun, [1, 2, 3])
The sleep can be any code that might run for inconsistent amounts of time.
What appears to be happening here is that the first parallel process to finish leaves the try block successfully, but then attempts to return from the function a value (foo) that hasn't been defined anywhere, which causes an exception. That exception kills the map without allowing the other processes to reach their finally blocks.
Also, if you add the line bar = bazz just after the sleep() call in the try block, then the first process to reach that line throws an exception (because bazz isn't defined), which causes its own finally block to be run, but then kills the map, causing the other try blocks to disappear without reaching their finally blocks, and the first process not to reach its return statement, either.
What this means for Python multiprocessing is that you can't trust the exception-handling mechanism to clean up resources in all processes if even one of the processes can have an exception. Additional signal handling or managing the resources outside the multiprocessing map call would be necessary.
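A minimal sketch of that workaround (the wrapper and the result tuples are illustrative, not from the original answer): catch each task's exception inside the worker, so one failure can't abort the map before the other workers reach their finally blocks:

import multiprocessing
import time

def safe_fun(arg):
    # Trap per-task failures so Pool.map always gets a result back
    # and no sibling worker is torn down mid-finally.
    try:
        try:
            print("tried " + str(arg))
            time.sleep(arg)
            return ("ok", arg)
        finally:
            print("finally cleaned up " + str(arg))
    except Exception as exc:
        return ("error", repr(exc))

if __name__ == '__main__':
    print(multiprocessing.Pool().map(safe_fun, [1, 2, 3]))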