I have recently been seeing the error "MySQL server has gone away" in my application logs for a daemon I have running that uses SQLAlchemy. I wrap every database query or update in a decorator that should close all the sessions after finishing. In theory, that should also close the connection.
My decorator looks like this:

from functools import wraps

def dbop(meth):
    @wraps(meth)
    def nf(self, *args, **kwargs):
        self.session = self.sm()             # open a new session
        res = meth(self, *args, **kwargs)
        self.session.commit()
        self.session.close()                 # close it when done
        return res
    return nf
I also initialize the database at the top of my Python script with:
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

def initdb(self):
    engine = create_engine(db_url)
    Base.metadata.create_all(engine)
    self.sm = sessionmaker(bind=engine,
                           autocommit=False,
                           autoflush=False,
                           expire_on_commit=False)
To my understanding, I am getting that error because my connection is timing out. Why would this be the case if I wrap each method in the decorator above? Is it because expire_on_commit causes queries to run even after the session is closed, which might reopen the connection? Is it because Base.metadata.create_all executes SQL that opens a connection which is never closed?
Your session is bound to an "engine", which in turn uses a connection pool. Each time SQLAlchemy requires a connection, it checks one out of the pool, and when it is done with it, the connection is returned to the pool, but it is not closed! This is a common strategy to reduce the overhead of opening and closing connections. All the options you set above affect only the session, not the connection!
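You can see this in action with a minimal sketch (assuming the default QueuePool and the db_url from your script, pointing at a reachable server):

from sqlalchemy import create_engine

engine = create_engine(db_url)

conn = engine.connect()      # checks a connection out of the pool
conn.close()                 # returns it to the pool; the underlying
                             # database connection stays open
print(engine.pool.status())  # reports one idle connection held in the pool

engine.dispose()             # this is what actually closes pooled connections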
By default, the connections in the pool are kept open indefinitely.
But MySQL will automatically close the connection after a certain period of inactivity (see the wait_timeout server variable).
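You can check the server's setting from Python; a quick sketch (wait_timeout defaults to 28800 seconds, i.e. 8 hours, on MySQL):

from sqlalchemy import create_engine, text

engine = create_engine(db_url)
with engine.connect() as conn:
    row = conn.execute(text("SHOW VARIABLES LIKE 'wait_timeout'")).fetchone()
    print(row)  # e.g. ('wait_timeout', '28800')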
The issue here is that your Python process is not informed by the MySQL server when a connection is closed due to the inactivity timeout. Instead, the next time a query is sent over that connection, Python discovers that the connection is no longer available. A similar thing can happen if the connection is lost for other reasons, for example a forced service restart that doesn't wait for open connections to be cleanly closed (such as the "immediate" shutdown mode in Postgres).
This is when you run into the exception.
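If you want to see it deliberately, here is a contrived sketch, assuming your MySQL user is allowed to lower wait_timeout for its own session:

import time
from sqlalchemy import create_engine, text
from sqlalchemy.exc import OperationalError

engine = create_engine(db_url)
with engine.connect() as conn:
    conn.execute(text("SET SESSION wait_timeout = 5"))  # 5 seconds, this session only
    time.sleep(10)                                      # exceed the timeout
    try:
        conn.execute(text("SELECT 1"))
    except OperationalError as exc:
        print(exc)  # driver-dependent, e.g. MySQL error 2006,
                    # "MySQL server has gone away"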
SQLAlchemy gives you various strategies for dealing with this, which are well documented in the "Dealing with Disconnects" section mentioned by @lukas-graf.
If you jump through some hoops you can get a reference to the connection currently in use by the session, and you could close it that way, but I strongly recommend against this. Instead, refer to the "Dealing with Disconnects" section above and let SQLAlchemy deal with this for you transparently. In your case, setting the pool_recycle option might solve your problem.
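For example, a sketch that recycles connections well before MySQL's wait_timeout, plus the pessimistic pool_pre_ping option available since SQLAlchemy 1.2:

from sqlalchemy import create_engine

engine = create_engine(
    db_url,
    pool_recycle=3600,   # replace connections older than one hour;
                         # keep this below MySQL's wait_timeout
    pool_pre_ping=True,  # SQLAlchemy 1.2+: ping each connection on
                         # checkout and reconnect transparently if stale
)

With pool_recycle, SQLAlchemy proactively replaces connections before the server can time them out; pool_pre_ping instead tests each connection as it is checked out.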