How does one turn on celery logging programmatically?
From the terminal, this works fine:
celery worker -l DEBUG
When I call get_task_logger(__name__).debug('hello'), I can see the message come up in the terminal (both stdout and stderr are being displayed). I can even import logging and call logger.info('hi') and see that too; both work.
However, while developing a task, I prefer to use a test module and call the task function directly rather than firing up a whole worker. But I can't see the log messages. I understand that Celery is redirecting everything to its internal apparatus, but I want to see the log messages on stdout too.
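(By "calling the task function directly" I mean something like the following, where mytask is a hypothetical task defined in mymodule:)
from mymodule import mytask

mytask(1, 2)  # calling a task directly runs its body in the current process, no worker involved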
How do I tell Celery to send a copy of the log messages back to stdout?
I've read a bunch of online articles about logging, but it seems that a number of Celery's logging-related configuration variables have been deprecated, and it's unclear to me from the docs what the supported path is today.
Here is an example module that creates a Celery app and attempts to log output. Nothing shows in the terminal.
Example mymodule.py:
from celery import Celery
import logging
from celery.utils.log import get_task_logger

app = Celery('test')
app.config_from_object('myfile', silent=True)

get_task_logger(__name__).warning('hello world')      # Celery's task logger
logging.getLogger(__name__).warning('hello world 2')  # plain stdlib logger
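(For what it's worth, the messages should become visible if the root logger gets a handler before anything logs; a minimal sketch, assuming the same mymodule.py:)
import sys
import logging
from celery.utils.log import get_task_logger

# Give the root logger a stdout handler before anything logs;
# without a running worker, these loggers should propagate to the root.
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)

get_task_logger(__name__).warning('hello world')  # should now appear on stdout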
EDIT: I know that I can redirect some of the output back to the terminal by adding a handler:
import sys

log = get_task_logger(__name__)
h = logging.StreamHandler(sys.stdout)  # send records to stdout
log.addHandler(h)
But is there a "Celery way" to do this? Maybe one that also gives me Celery-formatted lines of text:
[2014-03-02 15:51:32,949: WARNING] hello world
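(One way to get close to that format is TaskFormatter, which recent Celery versions provide in celery.app.log; a sketch, with the format string an assumption modeled on the line above:)
import sys
import logging
from celery.app.log import TaskFormatter
from celery.utils.log import get_task_logger

log = get_task_logger(__name__)
h = logging.StreamHandler(sys.stdout)
# TaskFormatter also understands %(task_name)s and %(task_id)s when run inside a task.
h.setFormatter(TaskFormatter('[%(asctime)s: %(levelname)s] %(message)s'))
log.addHandler(h)
log.warning('hello world')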
Only error logs should go to stderr. This is a pretty common convention, supported by the 12-factor design principles and the original intent behind I/O streams in operating systems. With all logs dumped to stderr, it's significantly more difficult to sift through the output using pipes.
Celery has a specific option, -f/--logfile, which you can use:
-f LOGFILE, --logfile=LOGFILE
                      Path to log file.
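For example (proj is a placeholder for your app module):
celery -A proj worker -l DEBUG --logfile=worker.log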
I have been looking at the same issue...
What seems to work best is to use a signal handler, as described at http://docs.celeryproject.org/en/latest/userguide/signals.html#after-setup-logger
In your celery.py file, use:
from celery.signals import after_setup_logger
import logging

@after_setup_logger.connect
def logger_setup_handler(logger, **kwargs):
    my_handler = MyLogHandler()
    my_handler.setLevel(logging.DEBUG)
    my_formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')  # custom formatter
    my_handler.setFormatter(my_formatter)
    logger.addHandler(my_handler)
    logging.info("My log handler connected -> Global Logging")

if __name__ == '__main__':
    app.start()  # app is the Celery instance defined earlier in celery.py
Then you can define MyLogHandler() as you wish; a minimal sketch of such a handler follows.
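(A minimal sketch, one that just writes formatted records to stdout:)
import sys
import logging

class MyLogHandler(logging.Handler):
    # Minimal handler that writes formatted records to stdout.
    def emit(self, record):
        try:
            sys.stdout.write(self.format(record) + '\n')
        except Exception:
            self.handleError(record)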
To send the logs to STDOUT you should also be able to skip the custom class and use the standard StreamHandler directly (I have not tested it; remember to import sys):
my_handler = logging.StreamHandler(sys.stdout)