 

Maximum clients reached on Heroku and Redistogo Nano

I am using celerybeat on Heroku with the RedisToGo Nano add-on.

There is one web dyno and one worker dyno.

The celerybeat worker is set to perform a task every minute.
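For reference, the beat schedule is roughly this (simplified; tasks.my_task stands in for the real task):

    # Celery settings -- simplified beat schedule; "tasks.my_task" is a placeholder
    from datetime import timedelta

    CELERYBEAT_SCHEDULE = {
        'run-every-minute': {
            'task': 'tasks.my_task',
            'schedule': timedelta(minutes=1),
        },
    }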

The problem: whenever I deploy a new commit, the dynos restart and I get this error:

2014-02-27T13:19:31.552352+00:00 app[worker.1]: Traceback (most recent call last):
2014-02-27T13:19:31.552352+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python2.7/site-packages/celery/worker/consumer.py", line 389, in start
2014-02-27T13:19:31.552352+00:00 app[worker.1]:     self.reset_connection()
2014-02-27T13:19:31.552352+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python2.7/site-packages/celery/worker/consumer.py", line 727, in reset_connection
2014-02-27T13:19:31.552352+00:00 app[worker.1]:     self.connection = self._open_connection()
2014-02-27T13:19:31.552352+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python2.7/site-packages/celery/worker/consumer.py", line 792, in _open_connection
2014-02-27T13:19:31.552352+00:00 app[worker.1]:     callback=self.maybe_shutdown)
2014-02-27T13:18:23.864287+00:00 app[worker.1]:     self.on_connect()
2014-02-27T13:18:23.864287+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python2.7/site-packages/redis/connection.py", line 263, in on_connect
2014-02-27T13:18:23.864287+00:00 app[worker.1]:     if nativestr(self.read_response()) != 'OK':
2014-02-27T13:18:23.864287+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python2.7/site-packages/redis/connection.py", line 314, in read_response
2014-02-27T13:18:23.864287+00:00 app[worker.1]:     raise response
2014-02-27T13:18:23.864287+00:00 app[worker.1]: ResponseError: max number of clients reached
2014-02-27T13:19:31.552352+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python2.7/site-packages/kombu/connection.py", line 272, in ensure_connection
2014-02-27T13:19:31.552352+00:00 app[worker.1]:     interval_start, interval_step, interval_max, callback)
2014-02-27T13:19:31.552591+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python2.7/site-packages/kombu/utils/__init__.py", line 218, in retry_over_time
2014-02-27T13:19:31.552591+00:00 app[worker.1]:     return fun(*args, **kwargs)
2014-02-27T13:19:31.552591+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python2.7/site-packages/kombu/connection.py", line 162, in connect
2014-02-27T13:19:31.552591+00:00 app[worker.1]:     return self.connection
2014-02-27T13:19:31.552591+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python2.7/site-packages/kombu/connection.py", line 617, in connection
2014-02-27T13:18:23.870811+00:00 app[worker.1]: [2014-02-27 13:18:23,870: ERROR/MainProcess] consumer: Connection to broker lost. Trying to re-establish the connection...

Those logs go on endlessly until I stop both dynos and restart them.

It has become a problem because it happens almost every time a new commit is deployed.

Any ideas why this is happening and how to solve this?

Asked Feb 27 '14 by JV.




1 Answer

The RedisToGo Nano plan caps concurrent Redis connections at 10.

The number of Redis connections used varies with your front-end and Celery worker settings. It sounds like your production stack normally uses 5 or more Redis connections.
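To see how many connections are actually open, you can ask Redis directly. A quick check from a one-off dyno (run via heroku run python), assuming the standard REDISTOGO_URL config var that the add-on sets:

    # Counts open client connections on the RedisToGo instance
    import os
    import redis

    r = redis.from_url(os.environ['REDISTOGO_URL'])
    print(r.info('clients')['connected_clients'])  # total open connections
    print(len(r.client_list()))                    # one entry per connection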

When you deploy new code, Heroku spins up an entirely new set of dynos while the old ones shut down, so at deploy time you are briefly running two stacks and using 10 or more Redis connections, which hits the cap.

There are two ways to fix this:

  • Increase the maximum number of allowed RedisToGo connections by upgrading to a larger plan ($$$)
  • Decrease the number of connections your stack uses (lower Celery concurrency or the Redis connections used by your web process); see the sketch below
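A minimal sketch of the second option, assuming Celery 3.x setting names (consistent with the paths in the traceback) and the Redis result backend:

    # Celery configuration sketch -- setting names assume Celery 3.x;
    # adjust the values to whatever your dynos can actually live with.
    BROKER_POOL_LIMIT = 1              # at most one broker connection per process
    CELERYD_CONCURRENCY = 1            # fewer worker processes, fewer connections
    CELERY_REDIS_MAX_CONNECTIONS = 2   # cap result-backend connections (Redis backend only)

Concurrency can also be lowered on the command line, e.g. celery worker -A myapp -B --concurrency=1 in the Procfile, where myapp is a placeholder for your app module.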

This is a simple matter of resource exhaustion. I would just pay for a larger RedisToGo plan.

Answered Nov 27 '22 by Winfield.