I have a Flask-based app. When I run it locally, I run it from the command line, but when I deploy it, I start it with gunicorn with multiple workers.
I want to use the logging module to log to a file. The docs I've found for this are https://docs.python.org/3/library/logging.html and https://docs.python.org/3/howto/logging-cookbook.html.
I am confused over the correct way to use logging when my app may be launched with gunicorn. The docs address threading but assume I have control of the master process. Points of confusion:
1. Will logger = logging.getLogger('myapp') return the same logger object in different gunicorn worker threads?
2. If I am attaching a logging FileHandler to my logger in order to log to a file, how can I avoid doing this multiple times in the different workers?
3. My understanding - which may be wrong - is that if I just call logger.setLevel(logging.DEBUG), messages still pass through the root logger, which may have a higher default logging level and may ignore debug messages, and so I also need to call logging.basicConfig(level=logging.DEBUG) in order for my debug messages to get through. But the docs say not to call logging.basicConfig() from a thread. How can I correctly set the root logging level when using gunicorn? Or do I not need to?
This is my typical Flask/Gunicorn setup. Note gunicorn is run via supervisor.
wsgi_web.py. Note ProxyFix is used to obtain the client's real IP address from Nginx.
from werkzeug.contrib.fixers import ProxyFix
from app import create_app
import logging

# Reuse gunicorn's error logger inside the app so Flask logs through
# gunicorn's handlers and at gunicorn's level.
gunicorn_logger = logging.getLogger('gunicorn.error')
application = create_app(logger_override=gunicorn_logger)
application.wsgi_app = ProxyFix(application.wsgi_app, num_proxies=1)
Edit February 2020: for newer versions of werkzeug, use the following and adjust the parameters to ProxyFix as necessary:
from werkzeug.middleware.proxy_fix import ProxyFix
from app import create_app
import logging
gunicorn_logger = logging.getLogger('gunicorn.error')
application = create_app(logger_override=gunicorn_logger)
application.wsgi_app = ProxyFix(application.wsgi_app, x_for=1, x_host=1)
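As a quick way to verify the ProxyFix wiring (this route is illustrative, not part of the original setup), request.remote_addr should now reflect the forwarded client address rather than the proxy's:

from flask import request

@application.route('/whoami')
def whoami():
    # Behind Nginx + ProxyFix, this is the forwarded client address,
    # not 127.0.0.1 (the proxy itself).
    return request.remote_addr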
Flask application factory create_app
from flask import Flask

def create_app(logger_override=None):
    app = Flask(__name__)
    if logger_override:
        # working solely with the flask logger
        app.logger.handlers = logger_override.handlers
        app.logger.setLevel(logger_override.level)
        # OR, working with multiple loggers
        # for logger in (app.logger, logging.getLogger('sqlalchemy')):
        #     logger.handlers = logger_override.handlers
        #     logger.setLevel(logger_override.level)
    # more
    return app
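When the app is launched from the command line instead, logger_override is None and the factory leaves Flask's default logging alone. A minimal local entry point might look like the following sketch (the file name and the basicConfig level are assumptions, not part of the original setup):

# run_local.py -- hypothetical local entry point (not part of the original
# setup). Without gunicorn there is no configured 'gunicorn.error' logger,
# so set up the root logger here, in the main process.
import logging
from app import create_app

logging.basicConfig(level=logging.DEBUG)
app = create_app()  # logger_override=None: Flask's default handlers apply

if __name__ == '__main__':
    app.run(debug=True)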
Gunicorn command (4th line) within the supervisor conf; note the --log-level parameter has been set to info in this instance, and the X-REAL-IP header is passed to the access log via --access-logformat:
[program:web]
directory = /home/paul/www/example
environment = APP_SETTINGS="app.config.ProductionConfig"
command = /home/paul/.virtualenvs/example/bin/gunicorn wsgi_web:application -b localhost:8000 --workers 3 --worker-class gevent --keep-alive 10 --log-level info --access-logfile /home/paul/www/logs/admin.gunicorn.access.log --error-logfile /home/paul/www/logs/admin.gunicorn.error.log --access-logformat '%%({X-REAL-IP}i)s %%(l)s %%(u)s %%(t)s "%%(r)s" %%(s)s %%(b)s "%%(f)s" "%%(a)s"'
user = paul
autostart=true
autorestart=true
Each worker is an isolated process with its own memory, so you can't really share the same logger across different workers.
Your code runs inside these workers because the master process only cares about managing the workers.
The master process is a simple loop that listens for various process signals and reacts accordingly. It manages the list of running workers by listening for signals like TTIN, TTOU, and CHLD. TTIN and TTOU tell the master to increase or decrease the number of running workers.
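For example, assuming the master's pid has been written to a pidfile (gunicorn's --pid option; the path below is an assumption), a sketch of resizing the worker pool from Python:

import os
import signal

# The path is an assumed example; point it at your gunicorn --pid file.
with open('/home/paul/www/logs/gunicorn.pid') as f:
    master_pid = int(f.read().strip())

os.kill(master_pid, signal.SIGTTIN)   # ask the master to start one more worker
# os.kill(master_pid, signal.SIGTTOU) # or to retire one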
In Gunicorn itself, there are two main run modes: sync and async. So this is different from threading; this is multiprocessing.
However, since Gunicorn 19, a threads option can be used to process requests in multiple threads. Using threads assumes use of the gthread worker.
With this in mind, the logging code is written once but will run again in each newly created worker. You can use the Singleton pattern to ensure the same logger instance is used inside each worker.
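A minimal sketch of that idea (the 'myapp' name and the log path are assumed placeholders): logging.getLogger() already returns a per-process singleton for a given name, so guarding the handler attachment means each worker wires up its FileHandler exactly once, which also addresses the question about attaching the handler multiple times.

import logging

def get_logger(name='myapp', log_file='/home/paul/www/logs/myapp.log'):
    # getLogger() returns the same object for the same name within one
    # worker process, so every call in this worker sees the same logger.
    logger = logging.getLogger(name)
    if not logger.handlers:
        # Guard: attach the handler only on the first call in this worker.
        handler = logging.FileHandler(log_file)
        handler.setFormatter(logging.Formatter(
            '%(asctime)s %(process)d %(levelname)s %(message)s'))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger

All workers append to the same file; including %(process)d in the format makes it easy to tell their lines apart.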
For configuring the logger itself, you just need to follow the normal process of setting the root logger level and the levels of the individual loggers.
basicConfig() won't affect the root handler if it's already set up:
This function does nothing if the root logger already has handlers configured for it.
To set the level on root explicitly, do:
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(name)
Then the level can be set on the handler or on the logger.
import logging.handlers

handler = logging.handlers.TimedRotatingFileHandler(log_path, when='midnight', backupCount=30)
handler.setLevel(min_level)
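To actually route records through that handler, attach it to the logger each worker uses; a minimal sketch, assuming placeholder values for log_path and min_level and a hypothetical 'myapp' logger name:

import logging
import logging.handlers

log_path = '/home/paul/www/logs/myapp.log'  # assumed placeholder path
min_level = logging.INFO                    # assumed placeholder level

handler = logging.handlers.TimedRotatingFileHandler(
    log_path, when='midnight', backupCount=30)
handler.setLevel(min_level)

logger = logging.getLogger('myapp')  # hypothetical logger name
logger.setLevel(min_level)
logger.addHandler(handler)

Keep in mind that several workers rotating the same file at midnight can race each other; the logging cookbook linked in the question covers multi-process patterns (for example QueueHandler or SocketHandler) if that becomes an issue.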
You can check this similar answer for logging-related details: Set logging levels
More Resources:
http://docs.gunicorn.org/en/stable/design.html