We are facing an APPARENT DEADLOCK while using c3p0 0.9.5.1 (which is the latest version of c3p0). The following is the connection pool config we are using:
p:driverClass="${app.jdbc.driverClassReplica}"
p:jdbcUrl="jdbc:mysql://database,database/dbname"
p:acquireIncrement="5"
p:idleConnectionTestPeriod="300"
p:maxPoolSize="100"
p:maxStatements="2000"
p:minPoolSize="10"
p:maxIdleTime="1800"
p:maxConnectionAge="3600"
p:maxIdleTimeExcessConnections="20"
p:numHelperThreads="15"
p:preferredTestQuery="SELECT 1"/>
And the following are the logs:
ThreadPoolAsynchronousRunner:743---- com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@70f6e5f5 -- APPARENT DEADLOCK!!! Creating emergency threads for unassigned pending tasks!
2015-06-20 11:36:15 WARN ThreadPoolAsynchronousRunner:759---- com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@70f6e5f5 -- APPARENT DEADLOCK!!! Complete Status:
Managed Threads: 15
Active Threads: 15
Active Tasks:
Every active task looks like this:
com.mchange.v2.c3p0.stmt.GooGooStatementCache$StatementDestructionManager$1UncheckedStatementCloseTask@4ec69595
on thread: C3P0PooledConnectionPoolManager[identityToken->z8kflt9a33udv812q4fqf|60dffe1d]-HelperThread-#6
Pending Tasks:
com.mchange.v2.c3p0.stmt.GooGooStatementCache$StatementDestructionManager$1UncheckedStatementCloseTask@2b131ea8
com.mchange.v2.c3p0.stmt.GooGooStatementCache$StatementDestructionManager$1UncheckedStatementCloseTask@7441bdaf
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@80c67ca
com.mchange.v2.c3p0.stmt.GooGooStatementCache$StatementDestructionManager$1UncheckedStatementCloseTask@667202e6
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@471c7e95
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@1fba8cac
com.mchange.v2.c3p0.stmt.GooGooStatementCache$StatementDestructionManager$1UncheckedStatementCloseTask@1069807a
com.mchange.v2.c3p0.stmt.GooGooStatementCache$StatementDestructionManager$1UncheckedStatementCloseTask@7e71d200
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@62923eda
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@6f5c8cc4
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@251dd0fa
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@4882e01f
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@848386a
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@3d6fbb65
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@72780365
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@25271699
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@293ca9dd
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@4db40151
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@64c294b1
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@22b02425
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@5a150aed
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@1b807bcf
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@10406124
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@72a98ad1
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@58d8da26
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@2a013697
com.mchange.v2.c3p0.stmt.GooGooStatementCache$1StmtAcquireTask@35a7090c
com.mchange.v2.c3p0.stmt.GooGooStatementCache$StatementDestructionManager$1UncheckedStatementCloseTask@69430e58
com.mchange.v2.c3p0.stmt.GooGooStatementCache$StatementDestructionManager$1UncheckedStatementCloseTask@3162e965
com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@54c8ff37
com.mchange.v2.c3p0.stmt.GooGooStatementCache$StatementDestructionManager$1UncheckedStatementCloseTask@57eb9f5d
Pool thread stack traces: all 15 threads look like the following:
Thread[C3P0PooledConnectionPoolManager[identityToken->z8kflt9a33udv812q4fqf|60dffe1d]-HelperThread-#6,5,main]
com.mysql.jdbc.StatementImpl.close(StatementImpl.java:575)
com.mchange.v1.db.sql.StatementUtils.attemptClose(StatementUtils.java:53)
com.mchange.v2.c3p0.stmt.GooGooStatementCache$StatementDestructionManager$1UncheckedStatementCloseTask.run(GooGooStatementCache.java:938)
Whenever this happens, the database becomes unresponsive for some time and the number of connections to it climbs. Is this an issue with c3p0? Would switching to some other JDBC pool like HikariCP or BoneCP help?
The issue you have is with the Statement cache. Switching to another pool that doesn't cache Statements might help, but turning off Statement caching within c3p0, by setting maxStatements to 0, would help in precisely the same way. If you don't cache Statements, you don't have to worry about deadlocks in the Statement cache. But perhaps you enjoy a performance boost from Statement caching.
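(For example, in the same Spring p-namespace style as your config above, disabling the cache would just be:

p:maxStatements="0"

since a value of 0 turns Statement caching off entirely.)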
Fortunately, if you want, you can retain the performance benefit of Statement caching without migrating to a different pool.
The issue is that some DBMS/JDBC drivers can't deal with a Statement getting closed while its parent Connection is in use. Formally, that ought to be okay, but in practice it is not for some JDBC drivers. When the Statement cache tries to expire a Statement whose parent Connection happens to be in use, the call to close() deadlocks, eventually saturating and freezing the helper thread pool.
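To make the failure mode concrete, here is a hypothetical standalone sketch of the pattern such drivers dislike (this is not c3p0 code, and the class name, URL, and credentials are placeholders): one thread keeps the Connection busy while another thread closes a Statement created from that same Connection.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class ConcurrentCloseSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/dbname", "user", "pass");
        Statement victim = conn.createStatement();

        Thread user = new Thread(() -> {
            try (Statement s = conn.createStatement()) {
                s.executeQuery("SELECT 1"); // parent Connection is "in use" here
            } catch (SQLException e) {
                e.printStackTrace();
            }
        });
        Thread closer = new Thread(() -> {
            try {
                victim.close(); // concurrent close() of a Statement on the same Connection
            } catch (SQLException e) {
                e.printStackTrace();
            }
        });
        user.start();
        closer.start();
        user.join();
        closer.join();
        conn.close();
    }
}

With a fragile driver, the close() in the second thread can block until the first thread's work completes, or indefinitely; it is exactly this blocked close() that piles up in the helper threads above.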
c3p0 includes a workaround for these fragile drivers.
Set the c3p0 config parameter statementCacheNumDeferredCloseThreads to 1, and c3p0 will neurotically track whether the parent of an expiring Statement is in use, and defer the close() call until it is not. This setting should, hopefully, resolve your issue.
I guess that in your Spring XML, the config would look like:
p:statementCacheNumDeferredCloseThreads="1"
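If you ever configure the pool programmatically instead of through Spring, the equivalent would be roughly the following sketch (the class name, URL, and credentials are placeholders; the bean setters mirror the config parameters above):

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class C3p0DeferredCloseExample {
    public static void main(String[] args) throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("com.mysql.jdbc.Driver");          // placeholder driver
        ds.setJdbcUrl("jdbc:mysql://localhost:3306/dbname"); // placeholder URL
        ds.setUser("user");                                  // placeholder credentials
        ds.setPassword("pass");
        ds.setMaxPoolSize(100);
        ds.setMaxStatements(2000);                      // keep the Statement cache for performance
        ds.setStatementCacheNumDeferredCloseThreads(1); // defer close() while the parent Connection is in use

        try (java.sql.Connection c = ds.getConnection();
             java.sql.Statement s = c.createStatement()) {
            s.executeQuery("SELECT 1"); // Statement is returned to the cache, not closed
        }
        ds.close();
    }
}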
I hope this helps!