I have a Flask application with a Redis worker (worker.py) running in the background. There is a separate module (selenium.py) that runs Selenium to perform some tasks; jobs for it are queued via Redis. I'm trying to log all processes into one file, but the worker prints all log output from selenium.py to the console instead of writing it to the log file I have designated.
The log configuration is in the main Flask file (app.py):
import logging
from logging.handlers import RotatingFileHandler
import logging.config
from rq import Queue
from rq.job import Job
from worker import conn
logging.config.fileConfig('logging.conf')
logger = logging.getLogger('app')
logger.warning('Started app.py!')
worker.py:
import os
import redis
from rq import Worker, Queue, Connection
listen = ['default']
redis_url = os.getenv('REDISTOGO_URL', 'redis://localhost:6379')
conn = redis.from_url(redis_url)
if __name__ == '__main__':
    with Connection(conn):
        worker = Worker(list(map(Queue, listen)))
        worker.work()
selenium.py:
import logging
logger = logging.getLogger('app')
logger.info('Selenium ran successfully!')
...Selenium Code Here...
logging.conf:
[loggers]
keys=root
[handlers]
keys=logfile
[formatters]
keys=logfileformatter
[logger_root]
level=INFO
handlers=logfile
[handler_logfile]
class=handlers.RotatingFileHandler
level=NOTSET
args=('logs/my_log.log','a',100000,50)
formatter=logfileformatter
[formatter_logfileformatter]
format=%(asctime)s %(name)-12s: %(levelname)s %(message)s
Console Output (for worker.py):
Selenium ran successfully!
Log Output:
Started app.py!
Expected Log Output (what I want):
Started app.py!
Selenium ran successfully!
So, given Flask and Redis, how do I combine my logs into one output file with Python's logging system?
Here is my proposed solution to this.
In Flask (or app.py):
import logging.config
from logsetup import LOGGING_CONFIG
logging.config.dictConfig(LOGGING_CONFIG)
logger = logging.getLogger(__name__)
logger.warning('Started app.py!')
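One caveat worth noting: since the config below sets disable_existing_loggers to True, make sure dictConfig runs before you import modules that create loggers at import time, or those loggers will be silenced.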
In logsetup.py (this is where you can adjust the format and add handlers for different files):
LOGGING_CONFIG = {
    'version': 1,
    'disable_existing_loggers': True,
    'loggers': {
        '': {  # root logger
            'level': 'INFO',
            'handlers': ['info_rotating_file_handler'],
            'propagate': False
        }
    },
    'handlers': {
        'info_rotating_file_handler': {
            'level': 'INFO',
            'formatter': 'info',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': 'logs/info.log',
            'mode': 'a',
            'maxBytes': 100000,
            'backupCount': 20
        }
    },
    'formatters': {
        'info': {
            'format': '[%(asctime)s] %(levelname)s [%(name)s::%(module)s.%(funcName)s:%(lineno)d] %(message)s',
            'datefmt': '%m-%d-%Y@%H:%M:%S'
        }
    }
}
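Since logsetup.py is where handlers live, you could, for instance, also mirror everything to the console while developing. A minimal sketch (the console_handler name is just my own label, nothing the config requires):

# Hypothetical extra handler: stream the same records to the console.
LOGGING_CONFIG['handlers']['console_handler'] = {
    'level': 'INFO',
    'formatter': 'info',
    'class': 'logging.StreamHandler',
}
# Attach it to the root logger so every module's output is mirrored.
LOGGING_CONFIG['loggers']['']['handlers'].append('console_handler')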
Now, here's the magic:
For modules that are imported and run by the Flask app, insert this code at the top:
import logging
logger = logging.getLogger(__name__)
logger.info('Started MODULE_NAME_HERE.py!')  # MODULE_NAME_HERE is just a placeholder, e.g. your scraper.py
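This works because logging.getLogger(__name__) returns a child of the root logger, and log records propagate up to the root's handlers, so imported modules need no logging configuration of their own.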
And for modules that run separately from Flask, such as your worker.py:
import logging
import logging.config
from logsetup import LOGGING_CONFIG
logging.config.dictConfig(LOGGING_CONFIG)
logger = logging.getLogger(__name__)
logger.info('Started worker.py!')
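Putting it together with the worker from the question, worker.py could look something like this (a sketch assuming logsetup.py sits next to worker.py so the import resolves):

import os
import logging
import logging.config

import redis
from rq import Worker, Queue, Connection

from logsetup import LOGGING_CONFIG

# The worker is a separate process, so it must configure logging itself;
# the dictConfig call in app.py does not carry over to it.
logging.config.dictConfig(LOGGING_CONFIG)
logger = logging.getLogger(__name__)

listen = ['default']
redis_url = os.getenv('REDISTOGO_URL', 'redis://localhost:6379')
conn = redis.from_url(redis_url)

if __name__ == '__main__':
    logger.info('Started worker.py!')
    with Connection(conn):
        worker = Worker(list(map(Queue, listen)))
        worker.work()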
I don't claim this is the best solution to your problem, but I believe it's at least a decent way to run several Python processes in parallel while logging to one central file.