 

Work around celerybeat being a single point of failure

I'm looking for a recommended solution to work around celerybeat being a single point of failure in a celery/rabbitmq deployment. Searching the web, I haven't found anything that makes sense so far.

In my case, a timed scheduler kicks off a series of jobs once a day, and those jobs can run for half a day or longer. Since there can only be one celerybeat instance, if something happens to it, or to the server it's running on, critical jobs will not run.

I'm hoping there is already a working solution for this, as I can't be the only one who needs a reliable (clustered or the like) scheduler. I don't want to resort to some sort of database-backed scheduler if I don't have to.

asked Feb 15 '12 by Dmitry Grinberg

1 Answer

There is an open issue about this in the Celery GitHub repo, though it's unclear whether anyone is working on it.

As a workaround, you could add a lock to the tasks so that only one instance of a given PeriodicTask runs at a time.

Something like:

    from django.core.cache import cache  # assuming Django's cache framework

    if not cache.add('My-unique-lock-name', True, timeout=lock_timeout):
        return  # another instance already holds the lock; skip this run

Figuring out the lock timeout is, well, tricky. We use 0.9 * the task's run_every seconds, since different celerybeat instances may try to run the task at slightly different times. The 0.9 leaves some margin: for example, when celery falls a little behind schedule once and then catches back up, the lock from the previous run could otherwise still be active.
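
Concretely, a minimal sketch of the pattern with the timeout derived from the schedule, assuming Django's cache framework; the task, key name, and one-hour run_every are illustrative:

    from datetime import timedelta

    from django.core.cache import cache

    run_every = timedelta(hours=1)  # this task's schedule (illustrative)
    # 0.9 * run_every: long enough to block duplicates queued by other
    # celerybeats, short enough that a slightly late next run still finds
    # the previous lock expired.
    lock_timeout = int(run_every.total_seconds() * 0.9)

    def my_periodic_task():
        if not cache.add('my-periodic-task-lock', True, timeout=lock_timeout):
            return  # duplicate queued by another celerybeat; skip
        ...  # the actual work goes here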

Then you can run a celerybeat instance on every machine. Each task will be queued once per celerybeat instance, but only one of them will acquire the lock and complete the run.

Tasks will still respect run_every this way; in the worst case, tasks will run at 0.9 * run_every intervals.

One issue with this approach: if tasks are queued but not processed at the scheduled time (for example, because the queue processors were unavailable), the lock may be placed at the wrong time, possibly causing the next task to simply not run. To work around this you would need some kind of detection mechanism for whether a task is running more or less on time, as in the sketch below.
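
A simple form of such a check, sketched here under assumptions: the tolerance value is arbitrary, and scheduled_ts would have to be passed along with the task:

    import time

    LATENESS_TOLERANCE = 300  # seconds; an assumed acceptable delay

    def guarded_run(scheduled_ts, work):
        # If we are processing the task far later than scheduled, skip
        # taking the lock entirely so the next scheduled run can proceed.
        if time.time() - scheduled_ts > LATENESS_TOLERANCE:
            return
        work()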

Still, this shouldn't be a common situation in production.

Another solution is to subclass the celerybeat Scheduler and override its tick method, adding a lock on every tick before processing tasks. This ensures that celerybeat instances sharing the same periodic tasks won't queue the same tasks multiple times: only the one celerybeat that wins the race on each tick will queue tasks. If one celerybeat goes down, another will win the race on the next tick.
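
A minimal sketch of this idea, again assuming Django's cache framework; the class name, lock key, and timeout are illustrative, not an official Celery API:

    from celery.beat import Scheduler

    from django.core.cache import cache

    class LockedScheduler(Scheduler):
        # How long one instance "owns" a tick; an assumed value that
        # should stay shorter than the interval between ticks.
        tick_lock_timeout = 5

        def tick(self, *args, **kwargs):
            # cache.add is atomic, so exactly one celerybeat wins the
            # race and queues tasks for this tick.
            if not cache.add('celerybeat-tick-lock', True,
                             timeout=self.tick_lock_timeout):
                # Lost the race: sleep until the next tick and retry.
                return self.max_interval
            return super(LockedScheduler, self).tick(*args, **kwargs)

You would then start beat on every machine with something like celery beat -S myapp.schedulers.LockedScheduler (the module path is hypothetical).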

This of course can be used in combination with the first solution.

Of course, for this to work the cache backend needs to be replicated and/or shared across all servers.
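
For example, with Django and memcached, every server would point at the same cache location (the host below is a placeholder):

    # settings.py on every server -- a per-machine local cache would
    # defeat the lock, since each celerybeat would "win" on its own box
    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
            'LOCATION': 'cache.example.internal:11211',  # placeholder host
        }
    }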

It's an old question, but I hope this helps someone.

answered Oct 24 '22 by arkens