 

Python Requests non-blocking? [duplicate]

Possible Duplicate:
Asynchronous Requests with Python requests

Is the Python module Requests non-blocking? I don't see anything in the docs about blocking or non-blocking.

If it is blocking, which module would you suggest?

asked Jan 09 '13 by Jeff



1 Answer

Like urllib2, requests is blocking.

But I wouldn't suggest using another library, either.

The simplest answer is to run each request in a separate thread. Unless you have hundreds of them, this should be fine. (How many hundreds is too many depends on your platform. On Windows, the limit is probably how much memory you have for thread stacks; on most other platforms the cutoff comes earlier.)
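For example, a minimal sketch of the one-thread-per-request approach (the URLs below are just placeholders):

```python
import threading
import requests

urls = [
    "https://example.com/a",
    "https://example.com/b",
    "https://example.com/c",
]

results = {}

def fetch(url):
    # requests.get() still blocks, but only this worker thread waits on it
    results[url] = requests.get(url).status_code

threads = [threading.Thread(target=fetch, args=(url,)) for url in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)
```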

If you do have hundreds, you can put them in a thread pool. The ThreadPoolExecutor example on the concurrent.futures page is almost exactly what you need; just change the urllib calls to requests calls. (If you're on 2.x, use futures, the backport of the same package on PyPI.) The downside is that you don't actually kick off all 1000 requests at once, just the first, say, 8.
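A sketch of that recipe with the urllib call swapped for requests (the URLs and worker count here are placeholders):

```python
import concurrent.futures
import requests

urls = ["https://example.com/page/%d" % i for i in range(20)]  # placeholder URLs

def load_url(url, timeout):
    return requests.get(url, timeout=timeout).text

# 8 workers: requests beyond the first 8 just wait in the pool's queue
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor:
    future_to_url = {executor.submit(load_url, url, 10): url for url in urls}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print("%r generated an exception: %s" % (url, exc))
        else:
            print("%r is %d characters" % (url, len(data)))
```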

If you have hundreds, and they all need to be in parallel, this sounds like a job for gevent. Have it monkeypatch everything, then write the exact same code you'd write with threads, but spawning greenlets instead of Threads.
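Roughly something like this (a hedged sketch; patch as early as possible, before requests touches the socket module; the URLs are placeholders):

```python
from gevent import monkey
monkey.patch_all()  # patch the stdlib before requests uses sockets

import gevent
import requests

urls = ["https://example.com/page/%d" % i for i in range(500)]  # placeholder URLs

def fetch(url):
    # looks like ordinary blocking code, but each greenlet yields
    # to the others while it waits on the network
    return requests.get(url).status_code

jobs = [gevent.spawn(fetch, url) for url in urls]
gevent.joinall(jobs)
print([job.value for job in jobs])
```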

grequests, which evolved out of the old async support directly in requests, effectively does the gevent + requests wrapping for you. And for the simplest cases, it's great. But for anything non-trivial, I find it easier to read explicit gevent code. Your mileage may vary.
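For the simple case, a grequests sketch looks something like this (URLs are placeholders):

```python
import grequests

urls = ["https://example.com/page/%d" % i for i in range(500)]  # placeholder URLs

reqs = (grequests.get(url) for url in urls)
responses = grequests.map(reqs)  # list of Response objects, None where a request failed
print([r.status_code for r in responses if r is not None])
```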

Of course if you need to do something really fancy, you probably want to go to twisted, tornado, or tulip (or wait a few months for tulip to be part of the stdlib).
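As a rough illustration of one of those options, here's a hedged sketch using tornado's AsyncHTTPClient; it assumes a recent tornado where fetch() returns an awaitable, and the URLs are placeholders:

```python
from tornado import gen, ioloop
from tornado.httpclient import AsyncHTTPClient

urls = ["https://example.com/a", "https://example.com/b"]  # placeholder URLs

async def main():
    client = AsyncHTTPClient()
    # fire off all fetches and wait for them together
    responses = await gen.multi([client.fetch(url) for url in urls])
    print([r.code for r in responses])

ioloop.IOLoop.current().run_sync(main)
```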

answered Sep 22 '22 by abarnert