
No operations allowed after connection closed errors in Slick/HikariCP

I'm using Slick 3.1.1 + HikariCP 2.5.1. My config is:

rdsConfig = {
  url = "jdbc:mysql://mydb.........us-west-2.rds.amazonaws.com:3306/owlschema"
  driver = "com.mysql.jdbc.Driver"
  connectionPool = HikariCP
  maxConnections = 222
  minConnections = 30
  keepAliveConnection = true
  properties = {
    user = "me"
    password = "mydarksecret"
  }
  numThreads = 40
}

I'm running around 1 query per 3 seconds, with each query taking < 0.4s. At first everything runs fine, but after about 2 hours, HikariCP starts closing connections, resulting in errors about 'no operations allowed after connection closed':

15:20:38.288 DEBUG [] [rdsConfig-8] com.zaxxer.hikari.pool.HikariPool - rdsConfig - Timeout failure stats (total=30, active=0, idle=30, waiting=0)
15:20:38.290 DEBUG [] [rdsConfig connection closer] com.zaxxer.hikari.pool.PoolBase - rdsConfig - Closing connection com.mysql.jdbc.JDBC4Connection@229960c: (connection is evicted or dead)
15:20:38.333 DEBUG [] [rdsConfig connection closer] com.zaxxer.hikari.pool.PoolBase - rdsConfig - Closing connection com.mysql.jdbc.JDBC4Connection@229960c failed
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after connection closed.
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_77]
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source) ~[na:1.8.0_77]
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source) ~[na:1.8.0_77]
    at java.lang.reflect.Constructor.newInstance(Unknown Source) ~[na:1.8.0_77]

Is there some other config setting I need to avoid this? I don't understand why HikariCP is closing connections at all, and if it wants to, shouldn't it only be supplying my code with non-closed connections? It has a reputation for running well with its default settings, so I'm confused as to why it's having problems. Thanks.

asked Dec 07 '16 by thund

1 Answer

HikariCP determined that the connection was dead, i.e. (connection is evicted or dead), and therefore attempted to close it. The driver then said, "Sorry, the connection is already closed", which is not unexpected.

You might think, "Why do you need to close a dead connection?" Well, maybe it was only temporarily unavailable (or slow), so the validation test failed even though the connection is still "alive" from the driver's perspective. Closing it, or at least attempting to, is essential to give the driver an opportunity to clean up resources.

HikariCP closes connections in five cases (a config sketch covering the timeout-related cases follows the list):

  1. The connection failed validation. This is invisible to your application. The connection is retired and replaced. You would see a log message to the effect of Failed to validate connection....
  2. A connection was idle for longer than idleTimeout. This is invisible to your application. The connection is retired and replaced. You would see a closure reason of (connection has passed idleTimeout).
  3. A connection reached its maxLifetime. This is invisible to your application. The connection is retired and replaced. You would see a closure reason of (connection has passed maxLifetime), or if the connection is in use at the time of reaching maxLifetime you would see (connection is evicted or dead) at a later time.
  4. The user manually evicted a connection. This is invisible to your application. The connection is retired and replaced. You would see a closure reason of (connection evicted by user).
  5. A JDBC call threw an unrecoverable SQLException. This should be visible to your application. You would see a closure reason of (connection is broken).
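For cases 2 and 3 specifically, the timeouts are tunable from the same Slick config block you already have. The sketch below is only illustrative: it assumes your Slick version's HikariCP-backed Database.forConfig passes the idleTimeout, maxLifetime, and validationTimeout keys through to the pool (check the Slick documentation for your version), and the values are placeholders rather than recommendations:

rdsConfig = {
  url = "jdbc:mysql://mydb.........us-west-2.rds.amazonaws.com:3306/owlschema"
  driver = "com.mysql.jdbc.Driver"
  connectionPool = HikariCP

  # Retire connections that sit idle for more than 10 minutes (milliseconds).
  idleTimeout = 600000

  # Retire every connection after 30 minutes regardless of activity. Keep this
  # shorter than any server-side limit such as MySQL's wait_timeout on RDS.
  maxLifetime = 1800000

  # Maximum time a validation check may take before the connection is
  # considered dead.
  validationTimeout = 5000

  properties = {
    user = "me"
    password = "mydarksecret"
  }
  numThreads = 40
}

If connections keep getting evicted as dead, comparing maxLifetime against the wait_timeout value on the RDS instance is usually the first thing to check.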

There are quite a few variables here. I do not know what HikariCP default settings might be altered by Slick, in addition to your user-specified settings. You do not show the surrounding logs, so I cannot tell whether there are other related issues. There is strangeness in the fact that your configuration asks for 222 connections, but the pool stats logged at the timeout failure are (total=30, active=0, idle=30, waiting=0), so it appears that RDS may be capping you (?).
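One way to take that variable out of play is to size the pool explicitly instead of requesting far more connections than the server is likely to grant. Again, only a sketch, reusing the sizing keys already present in the question's config; the numbers are placeholders and should be checked against the max_connections limit of your RDS instance class:

rdsConfig = {
  # ... url, driver, connectionPool and credentials as in the question ...

  # Do not ask for more connections than the database will actually grant.
  maxConnections = 40
  minConnections = 10
  numThreads = 40
}

With the requested maximum close to what the server actually allows, a mismatch like 222 configured versus 30 live connections becomes much easier to interpret.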

I suggest opening an issue on GitHub, attaching the log messages containing the pool settings at startup and the section of the log for the one minute preceding the exception, and grepping the log file for any other relevant warnings.

answered by brettw