I am using Python logging in my Django application. A class that connects to a backend API initialises this logger with a FileHandler if needed. The class gets instantiated every time an API call is made. I have tried to make sure additional handlers are not added every time, but
lsof | grep my.log
shows an increasing number of open handles on my log file, and after a while my server fails because it hits the open-file limit.
self.logger = logging.getLogger("FPA")
try:
    if self.logger.handlers[0].__class__.__name__ == "FileHandler":
        pass
except Exception, e:
    print 'new filehandler added' + str(e)
    ch = logging.FileHandler(FPA_LOG_TARGET)
    formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s - %(pathname)s @ line %(lineno)d")
    ch.setFormatter(formatter)
    self.logger.setLevel(logging.DEBUG)
    self.logger.addHandler(ch)
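For comparison, a guard that checks the handler types directly (rather than relying on an IndexError from `handlers[0]`) cannot add duplicates on repeated instantiation. This is a sketch, not my actual code; `FPA_LOG_TARGET` and `get_fpa_logger` are illustrative names:

```python
import logging

FPA_LOG_TARGET = "fpa.log"  # placeholder path for illustration

def get_fpa_logger():
    """Return the shared "FPA" logger, attaching a FileHandler at most once."""
    logger = logging.getLogger("FPA")
    # Scan existing handlers instead of catching an IndexError.
    if not any(isinstance(h, logging.FileHandler) for h in logger.handlers):
        # delay=True defers opening the file until the first record is emitted.
        ch = logging.FileHandler(FPA_LOG_TARGET, delay=True)
        formatter = logging.Formatter(
            "%(asctime)s - %(levelname)s - %(message)s - %(pathname)s @ line %(lineno)d")
        ch.setFormatter(formatter)
        logger.setLevel(logging.DEBUG)
        logger.addHandler(ch)
    return logger
```

Calling this from every API-client instantiation still yields exactly one FileHandler within a single process.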
I realise this may not be the best way to do this, but I have not found the error in my implementation so far.
I have not analysed it for long, but it looks like a concurrency problem: each process/thread keeps its own list of open handles to the log files.
How to fix it? For multithreaded code, make sure there is a global dictionary in which all handles are kept. For multiprocess code, I'm afraid I do not have an answer... each process keeps its own file handles. Mapping them into memory (memory-mapped files) could be an option, but I'm not sure that is a good solution - see this remark.
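The multithreaded suggestion above could be sketched like this (the registry and function names are my own, and a lock guards the dictionary against concurrent first-time creation):

```python
import logging
import threading

# Process-wide registry: one FileHandler per log path, shared by all threads.
_handlers = {}
_handlers_lock = threading.Lock()

def get_file_handler(path):
    """Return the shared FileHandler for `path`, creating it on first use."""
    with _handlers_lock:
        handler = _handlers.get(path)
        if handler is None:
            # delay=True: the file is only opened when a record is emitted.
            handler = logging.FileHandler(path, delay=True)
            _handlers[path] = handler
        return handler
```

With this, attaching `get_file_handler(FPA_LOG_TARGET)` from many threads never opens the same log file twice in one process; it does not help across processes.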
But the main question is why you need to do such a thing at all.
First of all, you can use a logging.conf
file to initialise all your loggers/handlers/formatters and, when needed (e.g. a specific logger is verbose and you want to log it to a separate file), add another logger with a different filename. It is quite sensible to add one logger per Django app, by adding to the app's main __init__.py:
import logging
log = logging.getLogger(__name__)
and then import log in the rest of the app code (views, models, etc.).
To use logging.conf,
add the following lines to your settings.py:
import os
import logging.config

DIRNAME = os.path.abspath(os.path.dirname(__file__))
logging.config.fileConfig(os.path.join(DIRNAME, 'logging.conf'))
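A minimal logging.conf could look like the sketch below; the logger name matches the "FPA" logger from the question, while the handler name, file path and format string are illustrative:

```ini
[loggers]
keys=root,FPA

[handlers]
keys=fpaFile

[formatters]
keys=default

[logger_root]
level=WARNING
handlers=

[logger_FPA]
level=DEBUG
handlers=fpaFile
qualname=FPA
propagate=0

[handler_fpaFile]
class=FileHandler
args=('fpa.log',)
formatter=default

[formatter_default]
format=%(asctime)s - %(levelname)s - %(message)s - %(pathname)s @ line %(lineno)d
```

Because fileConfig runs once at import of settings.py, the FileHandler is created exactly once per process, regardless of how often the API class is instantiated.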
Yes, it is manual, but then you do not need to change code, only a config file.
Another approach (if you really want to have one file per logger type) is to have a separate process which keeps the files open and accepts connections from the application. The logging module documentation has a nice example of this method.
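On the application side this only needs a SocketHandler; a sketch, assuming a log-server listening on the stdlib's default TCP logging port (the server itself is the cookbook example referred to above):

```python
import logging
import logging.handlers

# Client side only: records are sent to a separate log-server process
# that owns the file, so each Django process holds one socket, not one
# file handle per instantiation.
logger = logging.getLogger("FPA")
socket_handler = logging.handlers.SocketHandler(
    "localhost", logging.handlers.DEFAULT_TCP_LOGGING_PORT)
# No formatter is set here: the record is pickled and sent as-is,
# and the receiving server applies its own formatting.
logger.addHandler(socket_handler)
```

Note that SocketHandler connects lazily on the first emitted record, so configuring it is cheap even if the server is not up yet.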
Last, but not least, there are already some nice solutions which may be helpful. One, quite good, is to use django-sentry. This module can log all your exceptions and 404s (with the included extra middleware) and capture all logging (via the included logging handler).
The provided UI gives you the ability to search all logged messages and filter them by severity and logging source. And it is not limited to those - you can simply add your own modules.