
Handling (queuing) requests to a web service which calls a rate-limited external API

I have a web service exposed using the Flask framework.

This service uses an external API, which limits the number of times it can be called per second.

In a normal scenario, multiple calls to my API lead to multiple threads being spawned, and the external API gets called without any control over the number of requests per second.

Is there a way in which I can queue requests to my web service and then call the external API in a throttled way?

Any other ideas are also welcome.


Edit:

  1. I already know the rate limit of the external API (1 request per second).

  2. I am ok if the client requesting my API has to wait a little longer (a few seconds or minutes, depending on my load) before they get the results.

  3. I don't want my API clients to get failed results, i.e. I don't want them to have to call again and again. If I am already accessing the external API at the maximum possible rate, requests to my API should get queued and processed when the rate comes down.

  4. I read about Celery and Redis. Will I be able to queue the web service calls into these queues and process them later?

Asked Feb 08 '16 by Amit Tomar


1 Answer

One way is to wrap the request so that rate limit failures result in an exponential backoff until an acceptable rate is found.

The example below keeps retrying the request until it succeeds, waiting longer between attempts each time it fails, up to a maximum number of allowed retries (n_max). The number of seconds it waits before retrying grows exponentially (1, 2, 4, 8, 16, 32, etc.).

Here is an example that uses requests. The specifics of catching the errors and recognizing rate limit errors will depend on the library you're using to make the requests and the type of errors the external API returns, but the backoff algorithm should be the same.

import random
import time

import requests

def call_backoff(request_func, n_max=10):
    for n in range(n_max):
        error = False
        response = None
        try:
            response = request_func()
        except requests.RequestException:
            # You can test here whether this is a rate error
            # and not some other error, and decide how to
            # handle (or re-raise) the other errors.
            error = True

        # HTTP 429 means "Too Many Requests", i.e. we hit the rate limit
        if response is not None and response.status_code == 429:
            error = True

        if not error:
            return response

        # Exponential backoff with random jitter: ~1, 2, 4, 8, ... seconds
        milli = random.randint(1, 1000)
        secs = (2 ** n) + milli / 1000.0
        time.sleep(secs)

    # You can raise an error here if you'd like
    return None
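To make the wait pattern concrete, here is a standalone sketch of just the timing math from the loop above (no network calls; the seed is only there to make the illustration deterministic):

```python
import random

random.seed(0)  # deterministic jitter, purely for illustration

waits = []
for n in range(5):
    milli = random.randint(1, 1000)   # up to 1 second of jitter
    secs = (2 ** n) + milli / 1000.0  # base delay doubles each retry
    waits.append(secs)

# Each wait is its base delay (1, 2, 4, 8, 16 seconds) plus a little jitter
print(waits)
```

The jitter spreads retries out so that many clients which failed at the same moment don't all retry in lockstep.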
Answered Oct 12 '22 by Brendan Abel