Does Python's logging
library provide serialised logging for two (or more) separate Python processes logging to the same file? It doesn't seem clear from the docs (which I have read).
If so, what about logging from completely different machines (where the shared log file would exist on an NFS export accessible by both)?
The multiprocessing module has its own logger, named "multiprocessing". Objects and functions inside the module use it to log messages, such as debug messages noting that processes are running or have shut down. We can get this logger and use it for our own logging.
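For example, a minimal sketch using only documented multiprocessing calls (the level and message are illustrative):

import logging
import multiprocessing

# Attach a stderr handler to multiprocessing's own "multiprocessing" logger.
logger = multiprocessing.log_to_stderr()
logger.setLevel(logging.INFO)
logger.info("pool starting")  # our message, alongside the module's internal ones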
Multiprocessing with the logging module (QueueHandler): although the logging module is thread-safe, it is not process-safe. If you want multiple processes to write to the same log file, you have to manage access to that file yourself.
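A minimal sketch of that pattern using logging.handlers.QueueHandler and QueueListener, so that only the parent process touches the file (the file name "shared.log" and the worker body are illustrative):

import logging
import logging.handlers
import multiprocessing

def worker(queue):
    # Workers only talk to the queue; no child process opens the log file.
    root = logging.getLogger()
    root.addHandler(logging.handlers.QueueHandler(queue))
    root.setLevel(logging.INFO)
    root.info("hello from %s", multiprocessing.current_process().name)

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    # The listener runs in the parent and is the single writer to shared.log.
    file_handler = logging.FileHandler("shared.log")
    file_handler.setFormatter(logging.Formatter("%(asctime)s %(processName)s %(message)s"))
    listener = logging.handlers.QueueListener(queue, file_handler)
    listener.start()
    procs = [multiprocessing.Process(target=worker, args=(queue,)) for _ in range(3)]
    for proc in procs:
        proc.start()
    for proc in procs:
        proc.join()
    listener.stop()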
No, it is not supported. From the Python logging cookbook:
Although logging is thread-safe, and logging to a single file from multiple threads in a single process is supported, logging to a single file from multiple processes is not supported, because there is no standard way to serialize access to a single file across multiple processes in Python.
Afterwards the cookbook suggests using a single socket-server process that handles the logs, with the other processes sending log messages to it. There is a working example of this approach in the section "Sending and receiving logging events across a network".
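On the sending side that approach only needs the stock SocketHandler; a minimal sketch (the hostname is a placeholder, and the receiving end is the socket server from that cookbook section):

import logging
import logging.handlers

# Every process, on any machine, points at the same log server.
root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(logging.handlers.SocketHandler("logserver.example.com",
                                               logging.handlers.DEFAULT_TCP_LOGGING_PORT))
logging.info("pickled and sent over TCP to the central server")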
One grotty solution to this problem is to create a logging process that listens on a socket, on a single thread, and simply outputs whatever it receives.
The point is to hijack the socket queue as an arbitration mechanism.
#! /usr/bin/env python
import socket
import argparse

p = argparse.ArgumentParser()
p.add_argument("-p", "--port", help="which port to listen on", type=int, default=1339)
p.add_argument("-b", "--backlog", help="accept backlog size", type=int, default=5)
p.add_argument("-s", "--buffersize", help="recv buffer size", type=int, default=1024)
args = p.parse_args()

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(('', args.port))
s.listen(args.backlog)
print("Listening on port", args.port, "backlog size", args.backlog, "buffer size", args.buffersize)

while True:
    client = None
    try:
        # Each connection carries one log entry; print it and move on.
        client, address = s.accept()
        data = client.recv(args.buffersize)
        print(data.decode(errors="replace"))
    except OSError:
        pass
    finally:
        if client is not None:
            client.close()
And to test it:
#! /usr/bin/env python
import socket
import argparse

p = argparse.ArgumentParser()
p.add_argument("-p", "--port", help="send port", action='store', default=1339, type=int)
p.add_argument("text", help="text to send")
args = p.parse_args()

# Open a connection, send one message, and close; the listener treats
# each connection as a single log entry.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.connect(('localhost', args.port))
    s.sendall(args.text.encode())
finally:
    s.close()
Then use it like this:
stdbuf -o L ./logger.py -b 10 -s 4096 >>logger.log 2>&1 &
and monitor recent activity with:
tail -f logger.log
Each logging entry from any given process will be emitted atomically. Adding this into the standard logging system shouldn't be too hard. Using sockets means that multiple machines can also target a single log, hosted on a dedicated machine.
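As a sketch of that wiring, assuming the listener above is running on localhost port 1339 (the handler class name is made up):

import logging
import socket

class SocketLineHandler(logging.Handler):
    # Hypothetical handler: each record goes out on its own short-lived
    # connection, which is the unit of arbitration the listener relies on.
    def __init__(self, host="localhost", port=1339):
        super().__init__()
        self.host = host
        self.port = port

    def emit(self, record):
        try:
            with socket.create_connection((self.host, self.port)) as sock:
                sock.sendall(self.format(record).encode())
        except OSError:
            self.handleError(record)

logging.getLogger().addHandler(SocketLineHandler())
logging.getLogger().setLevel(logging.INFO)
logging.getLogger().info("routed through the socket listener")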