
async trio way to solve Hettinger's example

Raymond Hettinger gave a talk on concurrency in Python, where one of the examples looked like this:

import urllib.request

sites = [
    'https://www.yahoo.com/',
    'http://www.cnn.com',
    'http://www.python.org',
    'http://www.jython.org',
    'http://www.pypy.org',
    'http://www.perl.org',
    'http://www.cisco.com',
    'http://www.facebook.com',
    'http://www.twitter.com',
    'http://www.macrumors.com/',
    'http://arstechnica.com/',
    'http://www.reuters.com/',
    'http://abcnews.go.com/',
    'http://www.cnbc.com/',
]

for url in sites:
    with urllib.request.urlopen(url) as u:
        page = u.read()
        print(url, len(page))

Essentially, we fetch each of these links and print the number of bytes received; the whole loop takes about 20 seconds to run.

Today I found the trio library, which has quite a friendly API. But when I try to use it with this rather basic example, I fail to do it right.

First try (runs in around the same 20 seconds):

import urllib.request
import trio, time

sites = [
    'https://www.yahoo.com/',
    'http://www.cnn.com',
    'http://www.python.org',
    'http://www.jython.org',
    'http://www.pypy.org',
    'http://www.perl.org',
    'http://www.cisco.com',
    'http://www.facebook.com',
    'http://www.twitter.com',
    'http://www.macrumors.com/',
    'http://arstechnica.com/',
    'http://www.reuters.com/',
    'http://abcnews.go.com/',
    'http://www.cnbc.com/',
]


async def show_len(sites):
    t1 = time.time()
    for url in sites:
        with urllib.request.urlopen(url) as u:
            page = u.read()
            print(url, len(page))
    print("code took to run", time.time() - t1)

if __name__ == "__main__":
    trio.run(show_len, sites)

And the second one (same speed):

import urllib.request
import trio, time

sites = [
    'https://www.yahoo.com/',
    'http://www.cnn.com',
    'http://www.python.org',
    'http://www.jython.org',
    'http://www.pypy.org',
    'http://www.perl.org',
    'http://www.cisco.com',
    'http://www.facebook.com',
    'http://www.twitter.com',
    'http://www.macrumors.com/',
    'http://arstechnica.com/',
    'http://www.reuters.com/',
    'http://abcnews.go.com/',
    'http://www.cnbc.com/',
]

async def link_user(url):
    with urllib.request.urlopen(url) as u:
        page = u.read()
        print(url, len(page))

async def show_len(sites):
    t1 = time.time()
    for url in sites:
        await link_user(url)
    print("code took to run", time.time() - t1)


if __name__ == "__main__":
    trio.run(show_len, sites)

So how should this example be dealt with using trio?

Grail Finder, asked Apr 10 '18




1 Answer

Two things:

First, the point of async is concurrency. It doesn't make things magically faster; it just provides a toolkit for doing multiple things at the same time (which might be faster than doing them sequentially). If you want things to happen concurrently then you need to request that explicitly. In trio, the way you do this is by creating a nursery, and then calling its start_soon method. For example:

async def show_len(sites):
    t1 = time.time()
    async with trio.open_nursery() as nursery:
        for url in sites:
            nursery.start_soon(link_user, url)
    print("code took to run", time.time() - t1)

But, if you try making this change and then running the code, you'll see that it still isn't faster. Why not? To answer this, we need to back up a little and understand the basic idea of "async" concurrency. In async code, we can have concurrent tasks, but trio actually only runs one of them at any given time. So you can't have two tasks actually doing something at the same time. BUT, you can have two (or more) tasks sitting and waiting at the same time. And in a program like this, most of the time spent doing the HTTP request is spent sitting and waiting for the response to come back, so that makes it possible to get a speedup by using concurrent tasks: we start all the tasks, and then each of them runs for a little while to send the request, stops to wait for the response, and then while it's waiting the next one runs for a while, sends its request, stops to wait for its response, and then while it's waiting the next one runs... you get the idea.

Well, actually, in Python, everything I've said so far applies to threads too, because the GIL means that even if you have multiple threads, only one can actually be running at a time.

The big difference between async concurrency and thread-based concurrency, in Python, is that in thread-based concurrency the interpreter gets to pause any thread at any time and switch to running another thread. In async concurrency, we only switch between tasks at specific points that are marked in the source code – that's what the await keyword is for, it shows you where a task might be paused to let another task run. The advantage of this is that it makes it much easier to reason about your program, because there are many fewer ways that different threads/tasks can get interleaved and accidentally interfere with each other. The downside is that it's possible to write code that doesn't use await at the right places, and that means that we can't switch to another task. In particular, if we stop and wait for something, but don't mark it with await, then our whole program will stop, not just the specific task that made the blocking call.

Now let's look at your example code again:

async def link_user(url):
    with urllib.request.urlopen(url) as u:
        page = u.read()
        print(url, len(page))

Notice that link_user doesn't use await at all. This is what's stopping our program from running concurrently: each time we call link_user, it sends the request, and then waits for the response, without letting anything else run.

You can see this more easily if you add a print call at the beginning:

async def link_user(url):
    print("starting to fetch", url)
    with urllib.request.urlopen(url) as u:
        page = u.read()
        print("finished fetching", url, len(page))

It prints something like:

starting to fetch https://www.yahoo.com/
finished fetching https://www.yahoo.com/ 520675
starting to fetch http://www.cnn.com
finished fetching http://www.cnn.com 171329
starting to fetch http://www.python.org
finished fetching http://www.python.org 49239
[... you get the idea ...]

To avoid this, we need to switch to an HTTP library that's designed to work with trio. Hopefully in the future we'll have familiar options like urllib3 and requests. Until then, your best choice is probably asks.

So here's your code rewritten to run the link_user calls concurrently, and using an async HTTP library:

import trio, time
import asks
asks.init("trio")

sites = [
    'https://www.yahoo.com/',
    'http://www.cnn.com',
    'http://www.python.org',
    'http://www.jython.org',
    'http://www.pypy.org',
    'http://www.perl.org',
    'http://www.cisco.com',
    'http://www.facebook.com',
    'http://www.twitter.com',
    'http://www.macrumors.com/',
    'http://arstechnica.com/',
    'http://www.reuters.com/',
    'http://abcnews.go.com/',
    'http://www.cnbc.com/',
]

async def link_user(url):
    print("starting to fetch", url)
    r = await asks.get(url)
    print("finished fetching", url, len(r.content))

async def show_len(sites):
    t1 = time.time()
    async with trio.open_nursery() as nursery:
        for url in sites:
            nursery.start_soon(link_user, url)
    print("code took to run", time.time() - t1)


if __name__ == "__main__":
    trio.run(show_len, sites)

Now this should run faster than the sequential version.

There's more discussion of both of these points in the trio tutorial: https://trio.readthedocs.io/en/latest/tutorial.html#async-functions

You might also find this talk useful: https://www.youtube.com/watch?v=i-R704I8ySE

Nathaniel J. Smith, answered Sep 22 '22