We have a slightly unreliable database server, for various reasons, and as a consequence the database connections used by my application sometimes vanish out from under it. The connections are SQLAlchemy 0.6.5 connections to a PostgreSQL database in a Pylons 1.0 web runtime.
What I want is some way to catch most of these without a user-visible error; ideally, I'd test the connection at the pool level before returning it from the engine. I control the creation of the engine, so I'm okay there.
What's the best (most idiomatic / cleanest) way to accomplish this? I realize that there will always be the possibility of the connection dying between the check and the usage, but that's going to be pretty rare in this environment, and is therefore not a concern to me.
SQLAlchemy includes several connection pool implementations which integrate with the Engine. They can also be used directly for applications that want to add pooling to an otherwise plain DBAPI approach.
You call close(), as documented. dispose() is not needed; in fact, calling dispose() explicitly is virtually never needed in normal SQLAlchemy usage.
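As a sketch of that behavior, using SQLAlchemy's QueuePool directly with an in-memory SQLite DBAPI connection standing in for the question's PostgreSQL setup: close() checks the connection back into the pool rather than actually closing it.

```python
import sqlite3

from sqlalchemy.pool import QueuePool

# QueuePool wraps any DBAPI connection creator; SQLite stands in
# for PostgreSQL purely for illustration.
pool = QueuePool(lambda: sqlite3.connect(":memory:"),
                 pool_size=5, max_overflow=10)

conn = pool.connect()     # checks a connection out of the pool
print(pool.checkedin())   # 0 -- the one connection is checked out
conn.close()              # returns it to the pool; the DBAPI connection stays open
print(pool.checkedin())   # 1 -- back in the pool, ready for reuse
```

Because close() only checks the connection back in, the pool can hand the same underlying DBAPI connection to the next caller without reconnecting.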
    # Pool size is the maximum number of permanent connections to keep.
    pool_size=5,
    # Temporarily exceeds the set pool_size if no connections are available.
The create_engine() function of the sqlalchemy library takes a connection URL and returns an Engine that references both a Dialect and a Pool, which together interpret the DBAPI module's functions as well as the behavior of the database.
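For illustration (using an in-memory SQLite URL in place of the question's PostgreSQL URL), the engine returned by create_engine() exposes both of those pieces:

```python
from sqlalchemy import create_engine

# An in-memory SQLite URL stands in for the question's PostgreSQL URL.
engine = create_engine("sqlite://")

print(engine.dialect.name)  # "sqlite" -- the Dialect for this database
print(engine.pool)          # the Pool instance managing DBAPI connections
```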
You could use a pool listener:

    import sqlalchemy
    import sqlalchemy.exc
    import sqlalchemy.interfaces

    class ConnectionChecker(sqlalchemy.interfaces.PoolListener):
        def checkout(self, dbapi_con, con_record, con_proxy):
            if not is_valid_connection(dbapi_con):
                # a new connection will be used
                raise sqlalchemy.exc.DisconnectionError()

    # wire the listener into the engine (url is your database URL)
    engine = sqlalchemy.create_engine(url, listeners=[ConnectionChecker()])
How to implement is_valid_connection for your use case is left to you.
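One possible implementation, as a sketch: it works with any DBAPI connection and treats any error on a trivial query as a sign the connection is dead. (For PostgreSQL specifically you could instead inspect psycopg2 connection state, but the query approach is portable.)

```python
def is_valid_connection(dbapi_con):
    # Issue a trivial query; any DBAPI error means the connection is unusable.
    try:
        cursor = dbapi_con.cursor()
        cursor.execute("SELECT 1")
        cursor.close()
        return True
    except Exception:
        return False
```

Note that "SELECT 1" is valid on both PostgreSQL and SQLite, so the same check works across backends.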