My setup includes a load balancer (HAProxy) in front of two nginx servers running Django. Server 2 works fine, but sometimes server 1 will start crashing and its log will be full of this message:
*** uWSGI listen queue of socket ":8000" (fd: 3) full !!! (101/100) ***
How do I go about resolving this issue?
In addition to the caching framework, uWSGI includes a shared queue. At the low level it is a simple block-based shared array with two optional counters: one for stack-style (LIFO) usage, the other for FIFO.
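If the queue is enabled at startup (e.g. queue = 100 in the config), workers can exchange items through the uWSGI Python API. A minimal sketch, assuming the queue_push/queue_pop/queue_pull functions of the uwsgi module (importable only inside a running uWSGI process):

import uwsgi

# Producer: push an item onto the shared block array.
uwsgi.queue_push("job:resize-image")

# Stack-style (LIFO) consumer: take the most recently pushed item.
newest = uwsgi.queue_pop()

# FIFO consumer: take the oldest item via the second counter.
oldest = uwsgi.queue_pull()

# Items come back as bytes, or None when the queue is empty.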
By default uWSGI does not enable threading support within the Python interpreter core, so it is not possible to create background threads from Python code; if your app needs them, start uWSGI with the enable-threads option.
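A minimal uwsgi.ini sketch (the module path is hypothetical):

[uwsgi]
# hypothetical Django entry point
module = mysite.wsgi:application
# allow Python code to spawn background threads
enable-threads = true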
Post-buffering mode (uWSGI >= 2.0): without it, as soon as the uwsgi packet (read: the request headers) is parsed, the request is forwarded to the backend/backends. Now, if your web proxy is a streaming one too (like Apache, or the uWSGI HTTP router), your app could be blocked for ages by a request with a slow or large body.
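Enabling post-buffering makes uWSGI read the whole request body before handing the request to a worker, so a slow client cannot pin a worker while uploading. A sketch with an illustrative threshold:

# uwsgi.ini
# read the full body first; bodies larger than 4096 bytes
# are spooled to a temporary file instead of memory
post-buffering = 4096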
Remember: lazy-apps is different from lazy. The first only instructs uWSGI to load the application once per worker, while the second is more invasive (and generally discouraged), as it changes a lot of internal defaults; an ini example follows.
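In ini form the per-worker variant is a single line:

# uwsgi.ini
# load the application once in each worker instead of in the master
lazy-apps = true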
Your listen queue is full: the (101/100) in the log means a 101st connection arrived while uWSGI's backlog was at its default limit of 100. When you run uwsgi, pass it --listen 1024 to increase the queue to 1024.
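For example (the socket matches the log above; the module path is hypothetical):

uwsgi --socket :8000 --module mysite.wsgi:application --listen 1024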
Note that a larger queue makes you more susceptible to a DDoS attack.
You may also need to increase net.core.somaxconn, since the kernel caps any socket's listen backlog at that value:
sysctl -w net.core.somaxconn=65536
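That change takes effect immediately but is lost on reboot. To persist it (the file name is illustrative):

echo "net.core.somaxconn = 65536" > /etc/sysctl.d/99-somaxconn.conf
sysctl --system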