I've gone through several questions; this one is somewhat related but doesn't answer my question.
Does the c3p0 connection pooling maxPoolSize ensure that the number of connections at any given time never exceeds this limit? What if maxPoolSize=5 and 10 users start using the app at exactly the same time?
My app configuration:
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
<property name="driverClass"><value>${database.driverClassName}</value>/property>
<property name="jdbcUrl"><value>${database.url}</value></property>
<property name="user"><value>${database.username}</value></property>
<property name="password"><value>${database.password}</value></property>
<property name="initialPoolSize"><value>${database.initialPoolSize}</value>/property>
<property name="minPoolSize"><value>${database.minPoolSize}</value></property>
<property name="maxPoolSize"><value>${database.maxPoolSize}</value></property>
<property name="idleConnectionTestPeriod"><value>200</value></property>
<property name="acquireIncrement"><value>1</value></property>
<property name="maxStatements"><value>0</value></property>
<property name="numHelperThreads"><value>3</value></property>
</bean>
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="dataSource"/>
</bean>
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory"/>
<property name="dataSource" ref="dataSource"/>
</bean>
A connection pool is created for each unique connection string. When a pool is created, multiple connection objects are created and added to the pool so that the minimum pool size requirement is satisfied. Connections are added to the pool as needed, up to the maximum pool size specified (for c3p0 the default maxPoolSize is 15).
Connection Pooling with the c3p0 Library
c3p0 is an easy-to-use library for making traditional JDBC drivers “enterprise-ready” by augmenting them with functionality defined by the jdbc3 spec and the optional extensions to jdbc2. As of version 0.9.5, c3p0 fully supports the jdbc4 spec.
Using connection pools helps to both alleviate connection management overhead and decrease development tasks for data access. Each time an application attempts to access a backend store (such as a database), it requires resources to create, maintain, and release a connection to that datastore.
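As a rough illustration, here is a minimal programmatic sketch of the same setup as the XML configuration above; the driver class, URL, and credentials are placeholders, not values from the question:

import com.mchange.v2.c3p0.ComboPooledDataSource;
import java.beans.PropertyVetoException;
import java.sql.Connection;
import java.sql.SQLException;

public class PoolSetup {
    public static void main(String[] args) throws PropertyVetoException, SQLException {
        // build the pooled DataSource much like the Spring bean definition above
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        cpds.setDriverClass("org.postgresql.Driver");           // placeholder driver class
        cpds.setJdbcUrl("jdbc:postgresql://localhost/testdb");  // placeholder URL
        cpds.setUser("dbuser");                                  // placeholder credentials
        cpds.setPassword("dbpassword");

        cpds.setInitialPoolSize(3);   // connections created up front
        cpds.setMinPoolSize(3);       // pool never shrinks below this
        cpds.setMaxPoolSize(5);       // cap per inner pool (see the discussion below)
        cpds.setAcquireIncrement(1);

        try (Connection conn = cpds.getConnection()) {
            // use the connection; close() returns it to the pool, it is not physically closed
        }

        cpds.close(); // shut the pool down when the application stops
    }
}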
It is important to distinguish between DataSources and Connection pools.
maxPoolSize is enforced by c3p0 on a per-pool basis, but a single DataSource may own multiple Connection pools, as there is (and must be) a distinct pool for each set of authentication credentials. If only the default dataSource.getConnection() method is ever called, then maxPoolSize will be the maximum number of Connections that the pool acquires and manages. However, if Connections are acquired using dataSource.getConnection( user, password ), then the DataSource may hold up to (maxPoolSize * num_distinct_users) Connections.
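A small sketch of the difference, assuming the cpds configured in the earlier snippet (the user names "alice" and "bob" are made up for illustration): the no-argument getConnection() always draws from the single default-credential pool, while each distinct (user, password) pair passed to the two-argument overload gets a pool of its own, each capped at maxPoolSize.

// all of these come from the one default-auth pool: at most 5 open at once
Connection a = cpds.getConnection();

// these use per-credential pools; each such pool may grow to 5 Connections of its own
Connection b = cpds.getConnection("alice", "alicePassword");
Connection c = cpds.getConnection("bob", "bobPassword");

// so the DataSource as a whole may hold up to 5 * (number of distinct credential sets)
a.close(); b.close(); c.close(); // close() checks each Connection back into its pool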
To answer your specific question: if maxPoolSize is 5 and 10 clients hit a c3p0 DataSource simultaneously, no more than 5 of them will get Connections at first. The remaining clients will wait() until Connections are returned (or c3p0.checkoutTimeout has expired).
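As an illustration of that behavior (a sketch continuing from the cpds above, with an arbitrary 2-second timeout): with checkoutTimeout set, a client that cannot get one of the 5 Connections within the timeout receives an SQLException instead of waiting indefinitely.

cpds.setMaxPoolSize(5);
cpds.setCheckoutTimeout(2000); // wait at most 2 seconds for a free Connection (0 = wait forever)

Runnable client = () -> {
    try (Connection conn = cpds.getConnection()) {
        Thread.sleep(5000); // hold the Connection, simulating slow work
    } catch (SQLException e) {
        // thrown if no Connection became free within checkoutTimeout
        System.out.println("checkout timed out: " + e.getMessage());
    } catch (InterruptedException ignored) { }
};

// 10 simultaneous clients against a pool of 5: the first 5 get Connections,
// the rest block until a Connection is returned or the timeout expires
for (int i = 0; i < 10; i++) {
    new Thread(client).start();
}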
Some caveats: c3p0 enforces maxPoolSize as described above, but there is no guarantee that, even if only a single per-auth pool is used, you won't occasionally see more than maxPoolSize Connections checked out. For example, c3p0 expires and destroys Connections asynchronously. As far as c3p0 is concerned, a Connection is gone once it has been made unavailable to clients and marked for destruction, not when it has actually been destroyed. So it is possible, if maxPoolSize is 5, that you'd occasionally observe 6 open Connections at the database: 5 Connections would be active in the pool, while the 6th is queued for destruction but not yet destroyed.
Another circumstance where you might see unexpectedly many Connections open is if you modify Connection pool properties at runtime. In actual fact, the configuration of interior Connection pools is immutable. When you "change" a pool parameter at runtime, what actually happens is that a new pool is started with the new configuration, and the old pool is put into a "wind-down" mode. Connections checked out of the old pool remain live and valid, but when they are checked in, they are destroyed. Only when all old-pool Connections have been checked back in is the old pool truly dead.
So, if you have a pool with maxPoolSize Connections checked out, and then alter a configuration parameter, you might transiently see a spike of up to (2 * maxPoolSize) Connections if the new pool is hit with lots of traffic before the Connections checked out of the old pool have been returned. In practice this is very rarely an issue, as dynamic reconfiguration is not so common, and Connection checkouts ought to be (and usually are) very brief, so the old-pool Connections disappear rapidly. But it can happen!
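For example (a sketch, again assuming the cpds from above): calling a setter at runtime swaps in a fresh inner pool, and Connections already checked out of the old pool only die when they are returned, which is why a short spike above maxPoolSize is possible.

Connection held = cpds.getConnection(); // checked out of the current inner pool

cpds.setMaxPoolSize(10); // the "change" starts a new inner pool; the old one goes into wind-down

Connection fresh = cpds.getConnection(); // served by the new pool
// at this moment 'held' (old pool) and 'fresh' (new pool) are both open,
// so the database can briefly see more Connections than either pool's cap

held.close();  // returned to the wind-down pool and destroyed, not reused
fresh.close(); // returned to the new pool for reuse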
I hope this helps.
PS: acquireIncrement is best set to something larger than 1. An acquireIncrement of 1 means no Connections are prefetched ahead of demand, so whenever load increases, some Thread will directly experience the latency of Connection acquisition.
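So, for instance, a setting like the following (the value 3 is just an example, equivalent to raising ${database.acquireIncrement}-style values in the XML above) lets c3p0 fetch a few Connections ahead of demand:

cpds.setAcquireIncrement(3); // acquire 3 Connections at a time when the pool grows,
                             // instead of making each new request wait for its own acquisition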