
Best practice for rate limiting users of a REST API?

I am putting together a REST API, and as I'm unsure how it will scale or what the demand for it will be, I'd like to be able to rate limit use of it, as well as to be able to temporarily refuse requests when the box is over capacity or in some kind of slashdotting scenario.

I'd also like to be able to gracefully bring the service down temporarily (while giving clients results that indicate the main service is offline for a bit) when/if I need to scale the service by adding more capacity.

Are there any best practices for this kind of thing? The implementation is Rails with MySQL.

asked Mar 05 '09 by frankodwyer



3 Answers

This is all done with an outer web server that listens to the world (I recommend nginx or lighttpd).

Regarding rate limits, nginx can limit requests, e.g. to 50 req/minute per IP; anything over that gets a 503 page, which you can customize.
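
A minimal nginx sketch of that idea, using the standard limit_req_zone/limit_req directives (the zone name and paths are made up for illustration):

    # Hypothetical example: allow 50 requests/minute per client IP,
    # tracked in a 10 MB shared-memory zone named "api".
    http {
        limit_req_zone $binary_remote_addr zone=api:10m rate=50r/m;

        server {
            location / {
                # Requests over the limit are rejected with 503
                # (customizable via error_page).
                limit_req zone=api burst=10;
                limit_req_status 503;
            }
        }
    }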

Regarding expected temporary downtime, in the Rails world this is done via a special maintenance.html page. There is usually some automation that creates or symlinks that file when the Rails app servers go down. I'd recommend relying not on file presence, but on actual availability of the app server.
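
A common nginx pattern for the file-presence variant the paragraph mentions (the /system/maintenance.html path is the usual convention, but treat it as illustrative):

    # Hypothetical sketch: if maintenance.html exists (e.g. symlinked
    # by a deploy script), answer every request with it and a 503.
    server {
        error_page 503 @maintenance;

        if (-f $document_root/system/maintenance.html) {
            return 503;
        }

        location @maintenance {
            rewrite ^(.*)$ /system/maintenance.html break;
        }
    }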

But really you can start and stop services without losing any connections at all. E.g. you can run a separate instance of the app server on a different UNIX socket/IP port and have the balancer (nginx/lighty/haproxy) use that new instance too. Then you shut down the old instance and all clients are served by the new one only. No connections are lost. Of course, this scenario is not always possible; it depends on the type of change you introduced in the new version.
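
One way to express that swap with an nginx upstream (the socket paths are invented):

    # Hypothetical rolling swap: add the new instance and reload nginx,
    # then mark the old one "down" (or remove it) and reload again.
    upstream app {
        server unix:/tmp/app_new.sock;
        server unix:/tmp/app_old.sock down;
    }

    server {
        location / {
            proxy_pass http://app;
        }
    }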

haproxy is a balancer-only solution. It can balance requests to the app servers in your farm extremely efficiently.
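
A minimal haproxy sketch, with made-up addresses and timeouts:

    # Hypothetical config: round-robin across two Rails app servers.
    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend api
        bind *:80
        default_backend app_servers

    backend app_servers
        balance roundrobin
        server app1 10.0.0.11:3000 check
        server app2 10.0.0.12:3000 check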

For a quite big service, you end up with something like:

  • api.domain resolving round-robin to N balancers
  • each balancer proxies requests to M web servers for static content and P app servers for dynamic content (a sketch of the nginx side follows this list). Oh well, your REST API doesn't have static files, does it?

For a quite small service (under 2K rps), all balancing is done inside one or two web servers.

answered Sep 30 '22 by temoto


Good answers already - if you don't want to implement the limiter yourself, there are also solutions like 3scale (http://www.3scale.net), which does rate limiting, analytics, etc. for APIs. It works using a plugin (see here for the Ruby API plugin) which hooks into the 3scale architecture. You can also use it via Varnish, having Varnish act as a rate-limiting proxy.
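
As a generic illustration of throttling in Varnish itself (this uses the vsthrottle vmod from varnish-modules and is not 3scale's actual integration):

    # Hypothetical VCL: at most 50 requests per minute per client IP.
    vcl 4.0;
    import vsthrottle;

    sub vcl_recv {
        if (vsthrottle.is_denied(client.identity, 50, 60s)) {
            return (synth(429, "Too Many Requests"));
        }
    }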

answered Sep 30 '22 by steve


I'd recommend implementing the rate limits outside of your application, since otherwise the high traffic will still have the effect of killing your app. One good solution is to implement it as part of your Apache proxy, with something like mod_evasive.
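
A minimal mod_evasive sketch (the thresholds here are illustrative, not tuned recommendations):

    # Hypothetical Apache config: block an IP for 10 seconds once it
    # exceeds 5 requests to the same page within a 1-second interval.
    <IfModule mod_evasive20.c>
        DOSHashTableSize    3097
        DOSPageCount        5
        DOSPageInterval     1
        DOSSiteCount        50
        DOSSiteInterval     1
        DOSBlockingPeriod   10
    </IfModule>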

answered Sep 30 '22 by Denis Hennessy