I am currently facing the 'JBAS014516: Failed to acquire a permit within 5 MINUTES' error with my EJB/JBoss configuration. Below is my configuration:
<subsystem xmlns="urn:jboss:domain:ejb3:1.4">
<session-bean>
<stateless>
<bean-instance-pool-ref pool-name="slsb-strict-max-pool"/>
</stateless>
<stateful default-access-timeout="5000" cache-ref="simple"/>
<singleton default-access-timeout="5000"/>
</session-bean>
<mdb>
<resource-adapter-ref resource-adapter-name="hornetq-ra"/>
<bean-instance-pool-ref pool-name="mdb-strict-max-pool"/>
</mdb>
<pools>
<bean-instance-pools>
<strict-max-pool name="slsb-strict-max-pool" max-pool-size="20" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/>
<strict-max-pool name="mdb-strict-max-pool" max-pool-size="20" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/>
</bean-instance-pools>
</pools>
<caches>
<cache name="simple" aliases="NoPassivationCache"/>
<cache name="passivating" passivation-store-ref="file" aliases="SimpleStatefulCache"/>
<cache name="clustered" passivation-store-ref="abc" aliases="StatefulTreeCache"/>
</caches>
<async thread-pool-name="default"/>
<timer-service thread-pool-name="default">
<data-store path="timer-service-data" relative-to="jboss.server.data.dir"/>
</timer-service>
<remote connector-ref="remoting-connector" thread-pool-name="default"/>
<thread-pools>
<thread-pool name="default">
<max-threads count="10"/>
<keepalive-time time="100" unit="milliseconds"/>
</thread-pool>
</thread-pools>
<iiop enable-by-default="false" use-qualified-name="false"/>
<default-security-domain value="other"/>
<default-missing-method-permissions-deny-access value="true"/>
</subsystem>
To resolve this, should I increase 'strict-max-pool' to a higher value, or increase the thread pool size?
It is hard to suggest a good approach without understanding your use case, but most probably you are invoking a method on your EJB bean that takes too long to execute, gradually exhausting the instances in the pool until none is left for the calling process.

As more requests for this operation come in, the EJB container tries to hand each client the next free instance in the pool. Normally, when the operation on a bean instance finishes, the instance is returned to the pool and can serve the next client call. If the operation takes a long time, the pool gets exhausted until no instance is left to serve incoming calls. Based on your config, the EJB container has 20 instances; if none is available, it waits up to 5 minutes for an instance to be returned to the pool. If it fails to acquire one within that time, it throws the above-mentioned error to the caller.
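Conceptually, a strict-max pool behaves like a counting semaphore with max-pool-size permits and a bounded wait. The following is a minimal illustrative sketch of that behavior (not the actual JBoss implementation; the class name StrictMaxPoolSketch is my own):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Sketch of strict-max pool semantics: a semaphore with max-pool-size
// permits guards instance checkout; acquisition waits at most
// instance-acquisition-timeout before failing.
public class StrictMaxPoolSketch {
    private final Semaphore permits;
    private final long timeout;
    private final TimeUnit unit;

    public StrictMaxPoolSketch(int maxPoolSize, long timeout, TimeUnit unit) {
        this.permits = new Semaphore(maxPoolSize);
        this.timeout = timeout;
        this.unit = unit;
    }

    public void checkout() throws InterruptedException {
        // Mirrors the container: wait up to the timeout for a free instance
        if (!permits.tryAcquire(timeout, unit)) {
            throw new IllegalStateException(
                "Failed to acquire a permit within " + timeout + " " + unit);
        }
    }

    public void release() {
        permits.release();
    }

    public static void main(String[] args) throws InterruptedException {
        // 1 instance, 100 ms timeout: the second checkout finds no free
        // instance and times out, just like the JBAS014516 scenario
        StrictMaxPoolSketch pool = new StrictMaxPoolSketch(1, 100, TimeUnit.MILLISECONDS);
        pool.checkout();        // succeeds, pool now empty
        try {
            pool.checkout();    // no permit left -> times out
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With your settings (max-pool-size="20", timeout 5 MINUTES), 20 concurrent long-running calls are enough to make the 21st caller block and eventually fail with this error.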
So where does that lead us?
First and foremost, analyze the EJB operation that takes that long (it is very useful to add a simple EJB interceptor to your deployment that tracks the start and finish of your EJB calls, plus the execution time).
Determine who calls that EJB - maybe a client is making an excessive number of invocations against that bean.
If the long-running operation cannot be avoided or optimized, increase the pool size so that more instances of that bean are available to clients (adjust max-pool-size).
If your use case requires long-running operations but does not need to block and wait for their result, consider asynchronous processing with a JMS queue - create jobs in the queue and execute them using an MDB. You can still store and query the status of the processing via a database.
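The timing interceptor from the first point can be as simple as an @AroundInvoke method. A sketch (the class name TimingInterceptor is my own; attach it with @Interceptors on the bean or register it in ejb-jar.xml):

```java
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

// Logs each intercepted EJB call with its execution time, so you can
// spot the long-running methods that exhaust the pool.
public class TimingInterceptor {

    @AroundInvoke
    public Object time(InvocationContext ctx) throws Exception {
        long start = System.nanoTime();
        try {
            return ctx.proceed();
        } finally {
            System.out.println(
                format(ctx.getMethod().getName(), System.nanoTime() - start));
        }
    }

    // Separated out so the log message format is easy to verify
    static String format(String method, long elapsedNanos) {
        return method + " took " + (elapsedNanos / 1_000_000) + " ms";
    }
}
```

In a real deployment you would log through your logging framework instead of System.out, but this is enough to see which calls hold pool instances for minutes.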