In Python 3.5+, I often end up with many nested coroutines just to call something that is a coroutine deep down, where the await merely comes as a tail call in most of the functions, like this:
import asyncio

async def deep(time):
    await asyncio.sleep(time)
    return time

async def c(time):
    time *= 2
    return await deep(time)

async def b(time):
    time *= 2
    return await c(time)

async def a(time):
    time *= 2
    return await b(time)

async def test():
    print(await a(0.1))

loop = asyncio.get_event_loop()
loop.run_until_complete(test())
loop.close()
Those functions a, b, and c could be written as regular functions that return a coroutine, instead of as coroutines themselves, as follows:
import asyncio

async def deep(time):
    await asyncio.sleep(time)
    return time

def c(time):
    time *= 2
    return deep(time)

def b(time):
    time *= 2
    return c(time)

def a(time):
    time *= 2
    return b(time)

async def test():
    print(await a(0.1))

loop = asyncio.get_event_loop()
loop.run_until_complete(test())
loop.close()
Which way is more Pythonic? Which way is more performant? Which way will be easier for others to maintain in the future?
As a performance test, I removed the await asyncio.sleep(time) line from deep and timed 1,000,000 iterations of await a(0.1). On my test system with CPython 3.5.2, the first version took around 2.4 seconds and the second version took around 1.6 seconds. So it looks like there may be a performance penalty to making everything a coroutine, but it's certainly not an order of magnitude. Perhaps someone with more experience profiling Python code could create a proper benchmark and address the performance question definitively.
Use the first one: not only do you explicitly show the places where the code can be suspended (wherever await is placed), but you also get all the related benefits, such as tracebacks that show the helpful execution flow.
To see the difference, change your deep coroutine to throw an error:
async def deep(time):
    await asyncio.sleep(time)
    raise ValueError('some error happened')
    return time
For the first snippet you'll see this output:
Traceback (most recent call last):
  File ".\tmp.py", line 116, in <module>
    loop.run_until_complete(test())
  File ".\Python36\lib\asyncio\base_events.py", line 466, in run_until_complete
    return future.result()
  File ".\tmp.py", line 113, in test
    print(await a(0.1))
  File ".\tmp.py", line 110, in a
    return await b(time)
  File ".\tmp.py", line 106, in b
    return await c(time)
  File ".\tmp.py", line 102, in c
    return await deep(time)
  File ".\tmp.py", line 97, in deep
    raise ValueError('some error happened')
ValueError: some error happened
But for the second snippet only:
Traceback (most recent call last):
  File ".\tmp.py", line 149, in <module>
    loop.run_until_complete(test())
  File ".\Python36\lib\asyncio\base_events.py", line 466, in run_until_complete
    return future.result()
  File ".\tmp.py", line 146, in test
    print(await a(0.1))
  File ".\tmp.py", line 130, in deep
    raise ValueError('some error happened')
ValueError: some error happened
As you can see, the first traceback shows the "real" (and helpful) flow of execution, while the second one does not.
The first way of writing the code is also much easier to maintain: imagine you later realize that b(time) should also contain some async call, such as await asyncio.sleep(time). In the first snippet this call can be added directly without any other changes, but in the second you'll have to rewrite many parts of your code.
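To illustrate that maintenance cost, here is a sketch of what the second snippet would have to become once b needs its own await. The change is not mentioned in the original answer; it simply works through the consequence described above (using asyncio.run for brevity):

```python
import asyncio

async def deep(time):
    await asyncio.sleep(time)
    return time

def c(time):
    time *= 2
    return deep(time)

# b now needs its own await, so it must be turned back into a coroutine...
async def b(time):
    await asyncio.sleep(time)
    time *= 2
    return await c(time)

# ...which forces a to become a coroutine too, since it must await b.
async def a(time):
    time *= 2
    return await b(time)

async def test():
    print(await a(0.1))

asyncio.run(test())
```

One local change to b thus ripples up through every caller, whereas in the fully-async version the call chain never needs to be touched.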
This is one of the rare cases where "is it Pythonic?" is actually not an opinion-based question. Tail call optimization is officially un-Pythonic:
So let me defend my position (which is that I don't want [tail recursion elimination] in the language). If you want a short answer, it's simply unpythonic - the BDFL
(see also)