I have a Flask app that runs in multiple Gunicorn sync workers on a server and uses TimedRotatingFileHandler to log to a file from within the Flask application in each worker. In retrospect this seems unsafe. Is there a standard way to accomplish this in Python (at high volume) without writing my own socket-based logging server or similar? How do other people accomplish this? We already use syslog to aggregate across servers to a logging server, but I'd ideally like to persist the log on the app node first.
Thanks for your insights.
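For context, here's a minimal sketch of the kind of setup I'm describing; the logger wiring, file path, and rotation schedule are simplified placeholders:

```python
import logging
from logging.handlers import TimedRotatingFileHandler

from flask import Flask

app = Flask(__name__)

# Each Gunicorn worker executes this at import time, so every worker
# process ends up with its own TimedRotatingFileHandler pointed at the
# same file. At rollover each worker independently renames the file
# while the others still hold open handles, which is the unsafe part.
handler = TimedRotatingFileHandler(
    "/var/log/myapp/app.log",  # hypothetical path
    when="midnight",
    backupCount=7,
)
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(process)d %(levelname)s %(message)s"
))
app.logger.addHandler(handler)
app.logger.setLevel(logging.INFO)

@app.route("/")
def index():
    app.logger.info("handled a request")
    return "ok"
```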
Each of the workers is a UNIX process that loads the Python application; there is no shared memory between the workers. The suggested number of workers is (2 * CPU) + 1. For a dual-core (2 CPU) machine, the suggested value is 5 workers.
Gunicorn should only need 4-12 worker processes to handle hundreds or thousands of requests per second. Gunicorn relies on the operating system to provide all of the load balancing when handling requests. Generally we recommend (2 x $num_cores) + 1 as the number of workers to start off with.
Yes: with 5 worker processes, each running 8 threads, up to 40 requests can be served concurrently (5 × 8 = 40).
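To make those sizing rules concrete, here's a sketch of a gunicorn.conf.py (Gunicorn reads its config file as plain Python); the gthread worker class and the thread count of 8 are just carried over from the example above, not a recommendation:

```python
# gunicorn.conf.py: a sketch, assuming the (2 * cores) + 1 rule above.
import multiprocessing

# On a dual-core machine this evaluates to 5, matching the example.
workers = (2 * multiprocessing.cpu_count()) + 1

# Only meaningful with the threaded worker class; 5 workers * 8 threads
# gives the 40 concurrent requests mentioned above.
worker_class = "gthread"
threads = 8
```

Start Gunicorn with gunicorn -c gunicorn.conf.py myapp:app (module and app names are placeholders).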
In version 19.0, Gunicorn doesn't log to the console by default. To watch the logs in the console you need to use the option --log-file=-. In version 19.2, Gunicorn logs to the console by default again.
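The config-file equivalent, assuming you're using a gunicorn.conf.py (the --log-file option maps to the errorlog setting):

```python
# In gunicorn.conf.py: "-" means stderr, the console equivalent of
# passing --log-file=- on the command line.
errorlog = "-"
```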
I use ConcurrentRotatingFileHandler.
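A minimal sketch of that approach, assuming the concurrent-log-handler package from PyPI (pip install concurrent-log-handler); the path, size limits, and format are placeholders:

```python
import logging

# pip install concurrent-log-handler
from concurrent_log_handler import ConcurrentRotatingFileHandler

from flask import Flask

app = Flask(__name__)

# Size-based rotation instead of time-based: the handler takes an
# inter-process lock around writes and rollover, so every Gunicorn
# worker can safely attach its own instance to the same file.
handler = ConcurrentRotatingFileHandler(
    "/var/log/myapp/app.log",  # hypothetical path
    maxBytes=50 * 1024 * 1024,
    backupCount=10,
)
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(process)d %(levelname)s %(message)s"
))
app.logger.addHandler(handler)
app.logger.setLevel(logging.INFO)
```

Note this trades the time-based schedule of TimedRotatingFileHandler for size-based rotation, which is what this handler coordinates safely across processes.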