Cancel an already executing task in Python RQ?

I am using http://python-rq.org/ to queue and execute tasks on Heroku worker dynos. These are long-running tasks and occasionally I need to cancel them in mid-execution. How do I do that from Python?

from redis import Redis
from rq import Queue
from my_module import count_words_at_url

q = Queue(connection=Redis())
result = q.enqueue(
             count_words_at_url, 'http://nvie.com')

and later in a separate process I want to do:

from redis import Redis
from rq import Queue
from my_module import count_words_at_url

q = Queue(connection=Redis())
result = q.revoke_all() # or something

Thanks!

asked May 28 '13 by Charles Offenbacher



2 Answers

If you have the job instance at hand, simply call:

job.cancel()

Or, if you can determine the job id:

from rq import cancel_job
cancel_job('2eafc1e6-48c2-464b-a0ff-88fd199d039c')

http://python-rq.org/contrib/

But that just removes it from the queue; I don't know that it will kill it if already executing.
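
If all you have is the job id (for example result.id from the enqueue call in the question), you can also fetch the Job instance and cancel it that way. A minimal sketch, with 'some-job-id' as a placeholder, and with the same caveat that this only dequeues the job:

from redis import Redis
from rq.job import Job

redis_conn = Redis()
job = Job.fetch('some-job-id', connection=redis_conn)  # placeholder id
job.cancel()                                           # removes it from the queue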

You could have the task record its start wall time, check the elapsed time periodically, and raise an exception to self-destruct after a set period.
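
A rough sketch of that self-limiting idea; max_runtime is an illustrative parameter and the loop body just simulates the real work, nothing here is an RQ feature:

import time

def count_words_at_url(url, max_runtime=300):
    # Record the start time, check the elapsed wall time on each
    # iteration, and bail out once max_runtime seconds have passed.
    started = time.time()
    words = 0
    for _ in range(10000):                 # stands in for the real long-running work
        if time.time() - started > max_runtime:
            raise RuntimeError('exceeded %d seconds, self-destructing' % max_runtime)
        time.sleep(1)                      # pretend to process a chunk
        words += 1
    return words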

For a manual, ad-hoc kill: if you have redis-cli installed you can do something drastic like flushing all queues and jobs:

$ redis-cli
127.0.0.1:6379> flushall
OK
127.0.0.1:6379> exit

I'm still digging through the documentation trying to find out how to make a precision kill.

Not sure if that helps anyone since the question is already 18 months old.

answered Sep 21 '22 by John Mee


I think the most common solution is to have the worker spawn another thread/process to do the actual work, and then periodically check the job metadata. To kill the task, set a flag in the metadata and then have the worker kill the running thread/process.
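
A minimal sketch of that pattern, assuming a recent enough RQ that job.meta, job.refresh() and job.save_meta() are available; the 'cancel' key and the function names are just illustrative choices, not anything RQ defines:

import time
from multiprocessing import Process

from redis import Redis
from rq import get_current_job
from rq.job import Job

def _do_work(url):
    # Stand-in for the real long-running work.
    time.sleep(3600)

def count_words_at_url(url):
    # The function RQ actually runs: it hands the real work to a child
    # process and polls job.meta for a 'cancel' flag we set ourselves.
    job = get_current_job()
    child = Process(target=_do_work, args=(url,))
    child.start()
    while child.is_alive():
        job.refresh()                 # reload job.meta from Redis
        if job.meta.get('cancel'):
            child.terminate()         # kill the in-flight work
            child.join()
            return 'cancelled'
        time.sleep(1)
    return 'finished'

From any other process, request cancellation by setting the flag on the job:

job = Job.fetch('some-job-id', connection=Redis())  # placeholder id
job.meta['cancel'] = True
job.save_meta()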

answered Sep 22 '22 by sheridp