I have the following handler configuration for logging:
"handlers": {
'system_file': {
'level': 'DEBUG',
'class': 'logging.handlers.TimedRotatingFileHandler',
'filename': os.path.join(LOG_FOLDER, "all.log"),
'formatter': 'verbose',
'when': 'midnight',
'backupCount': '30',
}
}
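For reference, the handler above can be wired up as a standalone script roughly like this (a minimal sketch: the "verbose" format string, the LOG_FOLDER path, and the root-logger wiring are assumptions, not the actual project's settings; note that backupCount should be an int, since passing the string '30' breaks backup pruning):

```python
import logging
import logging.config
import os

LOG_FOLDER = "/tmp/app-logs"  # hypothetical path; substitute your own
if not os.path.isdir(LOG_FOLDER):
    os.makedirs(LOG_FOLDER)

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "verbose": {
            "format": "[%(levelname)s %(asctime)s %(filename)s:%(lineno)d] %(message)s",
        },
    },
    "handlers": {
        "system_file": {
            "level": "DEBUG",
            "class": "logging.handlers.TimedRotatingFileHandler",
            "filename": os.path.join(LOG_FOLDER, "all.log"),
            "formatter": "verbose",
            "when": "midnight",
            "backupCount": 30,  # an int, not the string '30'
        },
    },
    "root": {"handlers": ["system_file"], "level": "DEBUG"},
}

logging.config.dictConfig(LOGGING)
logging.getLogger(__name__).debug("logging configured")
```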
Now based on this configuration, my logs should get rotated every midnight, i.e. it should create date-wise log files.
In the all.log file, everything gets logged properly with the correct timestamp, but when the rotation happens, I do not see all the logs in the backup log files (the previous days' files).
For example:
Let's say today is 2019-10-29, the all.log file starts storing all the logs from 2019-10-29 00:00:00 to 2019-10-29 23:59:59.
The next day, i.e. on 2019-10-30 (after rotation should have happened), when I go and check all.log.2019-10-29, it contains logs from 2019-10-30 00:00:00 to 2019-10-30 01:00:00, while the all.log file stores the logs of 2019-10-30 from 00:00:00 onwards. So basically all my backup files only contain the next day's logs from 00:00:00 to 01:00:00.
all.log as on 2019-10-30
[DEBUG 2019-10-30 00:00:07,463 cron.py:44] .....
[DEBUG 2019-10-30 00:00:11,692 cron.py:44] ....
[DEBUG 2019-10-30 00:00:13,679 cron.py:44] ....
.
.
[DEBUG 2019-10-30 00:00:55,692 cron.py:44] ....
[DEBUG 2019-10-30 00:59:58,679 cron.py:44] ....
SERVER SHUTS DOWN HERE AT 1AM AND STARTS STORING LOGS WHEN IT RESTARTS
[DEBUG 2019-10-30 07:00:02,692 cron.py:44] ....
[DEBUG 2019-10-30 07:00:04,679 cron.py:44] ....
.
.
*Till current time*
all.log.2019-10-29
[DEBUG 2019-10-30 00:00:07,463 cron.py:44] .....
[DEBUG 2019-10-30 00:00:11,692 cron.py:44] ....
[DEBUG 2019-10-30 00:00:13,679 cron.py:44] ....
.
.
.
[DEBUG 2019-10-30 00:00:52,463 cron.py:44] .....
[DEBUG 2019-10-30 00:00:55,692 cron.py:44] ....
[DEBUG 2019-10-30 00:59:58,679 cron.py:44] ....
all.log.2019-10-28
[DEBUG 2019-10-29 00:00:04,463 cron.py:44] .....
[DEBUG 2019-10-29 00:00:09,692 cron.py:44] ....
[DEBUG 2019-10-29 00:00:11,679 cron.py:44] ....
.
.
.
[DEBUG 2019-10-29 00:00:49,463 cron.py:44] .....
[DEBUG 2019-10-29 00:00:52,692 cron.py:44] ....
[DEBUG 2019-10-29 00:59:56,679 cron.py:44] ....
I'm using a server which runs on a schedule: it shuts down at 1 AM and starts up at 7 AM. This is the only reason I can see for this weird behavior at 1 AM, but I'm not able to figure out why it would cause a problem.
Any help is appreciated.
I'm using Django 1.9.7 and Python 2.7.15
By default, the LOGGING setting is merged with Django's default logging configuration using the following scheme. If the disable_existing_loggers key in the LOGGING dictConfig is set to True (which is the dictConfig default if the key is missing) then all loggers from the default configuration will be disabled.
The default level is WARNING, which means that only events of this level and above will be tracked, unless the logging package is configured to do otherwise. Events that are tracked can be handled in different ways. The simplest way of handling tracked events is to print them to the console.
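That default can be observed directly (a minimal sketch, independent of the configuration above):

```python
import logging

root = logging.getLogger()

# With no explicit configuration, the root logger's effective level is
# WARNING, so DEBUG and INFO events are filtered out before any handler runs.
print(root.getEffectiveLevel() == logging.WARNING)  # True
print(root.isEnabledFor(logging.DEBUG))             # False
print(root.isEnabledFor(logging.ERROR))             # True
```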
Sometimes you have to get your logging handlers to do their work without blocking the thread you're logging from. This is common in web applications, though it also occurs in other scenarios. So, although not explicitly mentioned yet, logging does seem to be blocking. For details see the Python docs.
As mentioned here and in many other places, you can't use this handler for concurrent logging (and that is the case with Django, which runs in several threads/processes).
Potentially, because of that concurrency, the writers can overwrite each other's files.
To log to a single destination from multiple processes, you can use one of the following approaches:
- ConcurrentLogHandler
- SysLogHandler (or NTEventLogHandler on Windows)
- SocketHandler, which sends the logs to a separate process for writing to file
- QueueHandler with a multiprocessing.Queue, as outlined here.
And if you really need to do this based on time, you can subclass ConcurrentRotatingFileHandler and override its _shouldRollover() method with your own condition. It is not a perfect approach, but it should work.
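The QueueHandler approach can be sketched as follows. Note that QueueHandler and QueueListener were added in Python 3.2, so on the asker's Python 2.7 a backport would be needed; the demo below uses a plain queue.Queue and an in-memory stream as a stand-in sink for a single-process illustration, whereas across processes you would use a multiprocessing.Queue and the real TimedRotatingFileHandler:

```python
import io
import logging
import logging.handlers
import queue

# All log records go into a queue; a single listener thread owns the real
# handler, so only one writer ever touches the file and rotation cannot race.
log_queue = queue.Queue(-1)

# Stand-in sink for the demo; in a real setup this would be the
# TimedRotatingFileHandler from the configuration in the question.
stream = io.StringIO()
sink = logging.StreamHandler(stream)

listener = logging.handlers.QueueListener(log_queue, sink)
listener.start()

logger = logging.getLogger("app")
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.handlers.QueueHandler(log_queue))

logger.debug("routed through the queue to the single writer")
listener.stop()  # drains the queue and flushes the sink

print(stream.getvalue().strip())
# prints: routed through the queue to the single writer
```

Workers only pay the cost of enqueueing a record; all formatting and file I/O happens on the listener's thread.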
You can also have a look at a project on GitHub that is working on solving this issue.