I'm writing a small database adapter in Python, mostly for fun. I'm trying to get the code to recover gracefully when the MySQL connection "goes away", i.e. when wait_timeout is exceeded. I've set wait_timeout to 10 seconds so I can reproduce the problem quickly.
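(For reference, one way to get a 10-second timeout for testing is per-session; the connection parameters below are placeholders, not my real ones:)

    import MySQLdb

    cxn = MySQLdb.connect(host="localhost", user="me", passwd="secret", db="test")
    cur = cxn.cursor()
    # Make the server drop this connection after 10 idle seconds.
    cur.execute("SET SESSION wait_timeout = 10")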
Here's my code:
    def select(self, query, params=[]):
        try:
            self.cursor = self.cxn.cursor()
            self.cursor.execute(query, params)
        except MySQLdb.OperationalError, e:
            if e.args[0] == 2006:  # 2006: "MySQL server has gone away"
                print "We caught the exception properly!"
                print self.cxn
                self.cxn.close()
                self.cxn = self.db._get_cxn()
                self.cursor = self.cxn.cursor()
                self.cursor.execute(query, params)  # retry once on the fresh connection
                print self.cxn
        return self.cursor.fetchall()
Next I wait ten seconds and make a request. Here's what the CherryPy log shows:
[31/Dec/2009:20:47:29] ENGINE Bus STARTING
[31/Dec/2009:20:47:29] ENGINE Starting database pool...
[31/Dec/2009:20:47:29] ENGINE POOL Connecting to MySQL...
[31/Dec/2009:20:47:29] ENGINE POOL Connecting to MySQL...
[31/Dec/2009:20:47:29] ENGINE POOL Connecting to MySQL...
[31/Dec/2009:20:47:29] ENGINE POOL Connecting to MySQL...
[31/Dec/2009:20:47:29] ENGINE POOL Connecting to MySQL...
[31/Dec/2009:20:47:29] ENGINE Started monitor thread '_TimeoutMonitor'.
[31/Dec/2009:20:47:29] ENGINE Started monitor thread 'Autoreloader'.
[31/Dec/2009:20:47:30] ENGINE Serving on 0.0.0.0:8888
[31/Dec/2009:20:47:30] ENGINE Bus STARTED
We caught the exception properly! <====================================== Aaarg!
<_mysql.connection open to 'localhost' at 1ee22b0>
[31/Dec/2009:20:48:25] HTTP Traceback (most recent call last):
File "/usr/local/lib/python2.6/dist-packages/CherryPy-3.1.2-py2.6.egg/cherrypy/_cprequest.py", line 606, in respond
cherrypy.response.body = self.handler()
File "/usr/local/lib/python2.6/dist-packages/CherryPy-3.1.2-py2.6.egg/cherrypy/_cpdispatch.py", line 25, in __call__
return self.callable(*self.args, **self.kwargs)
File "adp.py", line 69, in reports
page.sources = sql.GetSources()
File "/home/swoods/dev/adp/sql.py", line 45, in __call__
return getattr(self.formatter.cxn, parsefn)(sql, sql_vars)
File "/home/swoods/dev/adp/database.py", line 96, in select
self.cursor.execute(query, params)
File "/usr/lib/pymodules/python2.6/MySQLdb/cursors.py", line 166, in execute
self.errorhandler(self, exc, value)
File "/usr/lib/pymodules/python2.6/MySQLdb/connections.py", line 35, in defaulterrorhandler
raise errorclass, errorvalue
OperationalError: (2006, 'MySQL server has gone away')
[31/Dec/2009:20:48:25] HTTP
Request Headers:
COOKIE: session_id=e14f63acc306b26f14d966e606612642af2dd423
HOST: localhost:8888
CACHE-CONTROL: max-age=0
ACCEPT: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
ACCEPT-CHARSET: ISO-8859-1,utf-8;q=0.7,*;q=0.3
USER-AGENT: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.0.249.43 Safari/532.5
CONNECTION: keep-alive
Remote-Addr: 127.0.0.1
ACCEPT-LANGUAGE: en-US,en;q=0.8
ACCEPT-ENCODING: gzip,deflate
127.0.0.1 - - [31/Dec/2009:20:48:25] "GET /reports/1 HTTP/1.1" 500 1770 "" "Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.0.249.43 Safari/532.5"
Why doesn't this work? I clearly catch the exception and regenerate both the connection and the cursor, but the retried query still fails with the same error. Is it related to how MySQLdb hands out connections?
I can't see it from the code, but my guess would be that the db._get_cxn() method is doing some kind of connection pooling and returning the existing connection object instead of making a new one. Is there not a call you can make on db to flush the existing, useless connection? (And should you really be calling an internal _-prefixed method?)
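For illustration only, here's a hypothetical pool with that behaviour; none of these names come from your actual code. If _get_cxn() caches its connection like this, closing self.cxn in the handler and asking the pool again just hands back the same dead object, so the retried execute() fails exactly as in the traceback above:

    import MySQLdb

    class Pool(object):
        # Hypothetical sketch of the suspected bug, not the real pool.
        def __init__(self, **connect_kwargs):
            self.connect_kwargs = connect_kwargs
            self._cxn = None

        def _get_cxn(self):
            # Creates a connection once, then keeps returning the cached
            # object -- even after a caller has close()d it.
            if self._cxn is None:
                self._cxn = MySQLdb.connect(**self.connect_kwargs)
            return self._cxn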
For preventing "MySQL server has gone away", I generally prefer to keep a timestamp on the connection recording the last time it was used. Before using the connection again, I check the timestamp and close/discard the connection if it was last used more than a few hours ago. This saves wrapping every possible query in a try...except OperationalError...try again block.
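A minimal sketch of that approach, assuming a small wrapper class owns the MySQLdb connection (the names max_idle and _connect are mine, purely illustrative):

    import time
    import MySQLdb

    class Database(object):
        def __init__(self, max_idle=4 * 3600, **connect_kwargs):
            # Reconnect if the connection has been idle for more than
            # max_idle seconds; keep this below the server's wait_timeout.
            self.max_idle = max_idle
            self.connect_kwargs = connect_kwargs
            self._connect()

        def _connect(self):
            self.cxn = MySQLdb.connect(**self.connect_kwargs)
            self.last_used = time.time()

        def cursor(self):
            if time.time() - self.last_used > self.max_idle:
                try:
                    self.cxn.close()
                except MySQLdb.OperationalError:
                    pass  # connection already gone; nothing to close
                self._connect()
            self.last_used = time.time()
            return self.cxn.cursor()

Every query then goes through cursor(), so a stale connection is replaced before it ever gets a chance to raise error 2006.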