I have set up Celery to work with my Django application using the daemonization instructions (http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#daemonizing).
Here is my test task:
    from datetime import timedelta
    from celery.task import periodic_task

    @periodic_task(run_every=timedelta(seconds=10))
    def debugger():
        logger.info("Running debugger")
        raise Exception('Failed')
I need a way of knowing that this task (debugger) failed because of the exception. Celery's log file shows the logger.info("Running debugger") line, but the exception itself is never logged. Am I missing something, or am I supposed to find failed tasks some other way?
Celery is a task queue/job queue based on asynchronous message passing. It can be used as a background task processor for your application: you hand it tasks to execute in the background, either right away or at a given moment, and it can be configured to run them synchronously or asynchronously.
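For instance, here is a minimal sketch of both modes (the add task, the app name, and the Redis broker URL are illustrative assumptions, not anything from the question): calling .delay() hands the task to a worker, while the task_always_eager setting forces it to run locally and synchronously.

    from celery import Celery

    # The broker URL is an assumption, purely for illustration.
    app = Celery('proj', broker='redis://localhost:6379/0')

    @app.task
    def add(x, y):
        return x + y

    # Asynchronous: the call returns immediately; a worker executes the task.
    async_result = add.delay(2, 3)

    # Synchronous (useful in tests): tasks execute locally, in-process.
    app.conf.task_always_eager = True
    print(add.delay(2, 3).get())  # 5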
celery-logger is a Python library for logging Celery events such as tasks received, tasks failed/succeeded, and tasks retried, along with the task args.
The question:
I'd like Celery to catch exceptions and write them to a log file instead of apparently swallowing them...
The current top answer here is mediocre as a professional solution. Many Python developers consider blanket, case-by-case exception catching a red flag. A reasonable aversion to it was well articulated in a comment:
Hang on, I'd expect there to be something logged in the worker log, at the very least, for every task that fails...
Celery does catch the exception; it just doesn't do what the OP wanted with it (it stores it in the result backend). The following gist is the best the internet has to offer on this problem. It's a little dated, but note the number of forks and stars.
https://gist.github.com/darklow/c70a8d1147f05be877c3
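As an aside, you can confirm where the exception went by inspecting the task result, assuming a result backend is configured; a minimal sketch using the debugger task from the question:

    # Assumes the Celery app has a result backend configured.
    result = debugger.delay()

    result.get(propagate=False)  # wait, but don't re-raise the stored exception
    print(result.state)          # 'FAILURE'
    print(result.result)         # the Exception('Failed') instance
    print(result.traceback)      # traceback string, as stored in the backend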
The gist takes the failure case and does something custom with it, which is a superset of the OP's problem. Here is how to adjust the gist's solution to log the exception:
    import logging

    from celery import Task

    logger = logging.getLogger('your.desired.logger')

    class LogErrorsTask(Task):
        def on_failure(self, exc, task_id, args, kwargs, einfo):
            logger.exception('Celery task failure!!!1', exc_info=exc)
            super(LogErrorsTask, self).on_failure(exc, task_id, args, kwargs, einfo)
You will still need to make sure all your tasks inherit from this task class; the gist shows how to do this if you're using the @task decorator (with the base=LogErrorsTask kwarg), as in the sketch below.
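For example, a minimal sketch of hooking a task up to that base class (the app instance and the process_report task are assumptions, not part of the gist):

    @app.task(base=LogErrorsTask)
    def process_report(report_id):
        # Any unhandled exception raised here goes through
        # LogErrorsTask.on_failure and ends up in 'your.desired.logger'.
        raise RuntimeError('report %s failed' % report_id)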
The benefit of this solution is that it does not nest your code in any additional try/except blocks; it piggybacks on the failure code path Celery is already using.
You can look at the Celery User Guide:
    from celery.utils.log import get_task_logger

    logger = get_task_logger(__name__)

    @app.task
    def div():
        try:
            1 / 0
        except ZeroDivisionError:
            logger.exception("Task error")
From the documentation for the Python logging module:
Logger.exception(msg, *args)
Logs a message with level ERROR on this logger. The arguments are interpreted as for debug(). Exception info is added to the logging message. This method should only be called from an exception handler.
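To see the same behaviour with plain Python (no Celery involved), here is a short sketch; the handler appends the active traceback to the ERROR-level record automatically:

    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)

    try:
        1 / 0
    except ZeroDivisionError:
        # Emits an ERROR record followed by the full traceback.
        logger.exception("Task error")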