Recently, my Django app has been crashing frequently due to database connection errors:
OperationalError: FATAL: sorry, too many clients already
When I go into the app database, I see that there are indeed nearly 100 open connections, all with the same query (executed by the Django ORM) and all in the idle state.
I have been manually running SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE state = 'idle'; but I am perplexed as to why this is happening. Can anyone shed any light on what is going on here?
My Django database settings do not stray from the defaults (I have not defined CONN_MAX_AGE or anything of that nature).
What could cause this? I'm not doing any advanced Django queries. Is this something that can be solved with a Django setting or perhaps some PostgreSQL configuration? Any advice is appreciated.
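For reference, the relevant part of my settings.py is essentially the stock PostgreSQL configuration (the names and credentials below are placeholders):

    # settings.py -- relevant excerpt; names/credentials are placeholders.
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql',
            'NAME': 'myapp',
            'USER': 'myapp_user',
            'PASSWORD': 'secret',
            'HOST': 'localhost',
            'PORT': '5432',
            # No CONN_MAX_AGE here, so Django uses its default of 0:
            # open a new connection per request, closed when the request ends.
        }
    }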
Kill an idle connection: pass the process id (pid) of the backend to the terminate function, e.g. SELECT pg_terminate_backend(7408); and that backend process is terminated.
idle: the connection is open but the backend is not currently executing a query; these connections should be tracked by how long they have been idle. idle in transaction: the backend is inside a transaction but is currently not doing anything, possibly waiting for input from the client.
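To see how long those backends have been sitting idle, one option is to query pg_stat_activity through Django's own connection. This is just a sketch (run it from something like manage.py shell); it uses the pid, usename, state and state_change columns that PostgreSQL exposes in that view:

    # Sketch: list idle backends and how long each has been idle, using
    # Django's own DB connection. state_change is when the backend last
    # changed state, so now() - state_change is the time it has spent idle.
    from django.db import connection

    with connection.cursor() as cursor:
        cursor.execute("""
            SELECT pid, usename, now() - state_change AS idle_for
            FROM pg_stat_activity
            WHERE state = 'idle'
            ORDER BY idle_for DESC
        """)
        for pid, user, idle_for in cursor.fetchall():
            print(pid, user, idle_for)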
Django officially supports the following databases: PostgreSQL, MariaDB, MySQL, Oracle, and SQLite.
Apparently you don't disconnect. Calling db.close_connection() after the query finishes would help. Also, if I understand correctly, setting CONN_MAX_AGE to some short value could help. And consider using a session pooler, e.g. pgbouncer, for the Django connections. That way, if you have too many connections, it will wait (or reuse a previous one, depending on the config) instead of aborting execution with an error...
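For example, something like this (a rough sketch, assuming your queries run in plain worker threads; connection.close() is the call in recent Django versions, while django.db.close_connection() was the older name):

    # Rough sketch: close the thread's connection explicitly when its work
    # is done, so it is not left idle on the server. Run inside a configured
    # Django project (e.g. a management command); the SELECT 1 is a
    # placeholder for whatever ORM work the thread actually does.
    import threading
    from django.db import connection

    def worker():
        try:
            with connection.cursor() as cursor:
                cursor.execute("SELECT 1")   # placeholder for the real query
                cursor.fetchone()
        finally:
            connection.close()               # drop this thread's connection

    threads = [threading.Thread(target=worker) for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()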
Update: an explanation of why I propose this. From the docs: since each thread maintains its own connection, your database must support at least as many simultaneous connections as you have worker threads.
So if you have more threads than PostgreSQL's max_connections, you get the error you mention. Each thread can reuse its connection if CONN_MAX_AGE has not passed. Your setting is 0, so each connection should be closed as soon as the query completes, yet you see 100 idle connections. So they are not being closed. The large number of connections also means they are not being reused (logic: if you had 100 parallel queries they would not all be idle, and since there are so many, they are not being reused and new ones keep being opened). So I think Django does not close them as promised, i.e. CONN_MAX_AGE set to 0 does not work in your code. That is why I propose using db.close_connection() to force the disconnect; setting CONN_MAX_AGE to some small value may also change the behaviour.
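Concretely, the CONN_MAX_AGE part would look something like this in settings.py (the 60 seconds is an arbitrary example value, pick what fits your workload):

    # settings.py -- sketch: let each thread keep and reuse its connection
    # for up to 60 seconds instead of opening a fresh one for every query.
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql',
            # ... NAME, USER, PASSWORD, HOST, PORT as in your existing config ...
            'CONN_MAX_AGE': 60,
        }
    }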
Best guess without more details, but if it's the same query and they're all idle, it feels like you're doing some kind of async programming and you've hit a deadlock, and specifically that deadlock is manifesting itself as the db connections getting saturated.