A thread in a cached thread pool can be idle for 60 seconds before it is terminated. A cached thread pool is created by calling the newCachedThreadPool() method of the Executors class.
The thread pool will create a maximum of 10 threads to process 10 requests at a time. Once any one of those threads finishes its work, the pool internally assigns the 11th request to that thread and keeps doing the same for all remaining requests.
Cached thread pools use a “synchronous handoff” to queue new tasks. The basic idea of a synchronous handoff is simple yet counter-intuitive: an item can be queued if and only if another thread takes that item at the same time. In other words, the SynchronousQueue cannot hold any tasks whatsoever.
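For reference, Executors.newCachedThreadPool() is built roughly like this in the JDK (a sketch of the equivalent direct construction, not code from the question):

import java.util.concurrent.*;

// Roughly how newCachedThreadPool() is put together: no core threads, an
// effectively unbounded max, a 60-second idle timeout, and a SynchronousQueue
// that hands each task directly to a waiting (or freshly created) thread.
ExecutorService cached = new ThreadPoolExecutor(
        0,                              // core size
        Integer.MAX_VALUE,              // max size
        60L, TimeUnit.SECONDS,          // idle timeout
        new SynchronousQueue<Runnable>());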
The ThreadPoolExecutor has several key behaviors, and your problems can be explained by them.
When tasks are submitted:
1. If the thread pool has not reached the core size, it creates new threads.
2. If the core size has been reached and there are no idle threads, it queues tasks.
3. If the core size has been reached, there are no idle threads, and the queue becomes full, it creates new threads (until it reaches the max size).
4. If the max size has been reached, there are no idle threads, and the queue is full, the rejection policy kicks in.
In the first example, note that the SynchronousQueue essentially has a size of 0. Therefore, the moment you reach the max size (3), the rejection policy kicks in (behavior #4).
In the second example, the queue of choice is a LinkedBlockingQueue, which has an unlimited size. Therefore, you get stuck with behavior #2.
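To make the two behaviors concrete, here is a rough sketch of both setups (the sizes are illustrative and may not match the original question's code exactly): with a SynchronousQueue and a max size of 3, the fourth concurrent task is rejected (behavior #4), while with an unbounded LinkedBlockingQueue the pool never grows beyond its core size (behavior #2).

import java.util.concurrent.*;

public class QueueBehaviorDemo {
    public static void main(String[] args) {
        // A task that keeps its thread busy long enough to observe the pool's behavior.
        Runnable longTask = () -> {
            try { Thread.sleep(5000); } catch (InterruptedException ignored) { }
        };

        // Behavior #4: a SynchronousQueue holds nothing, so once 3 threads (the max)
        // are busy, the 4th submission hits the default rejection policy.
        ThreadPoolExecutor sync = new ThreadPoolExecutor(
                1, 3, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
        for (int i = 0; i < 4; i++) {
            try {
                sync.execute(longTask);
                System.out.println("accepted task " + i);
            } catch (RejectedExecutionException e) {
                System.out.println("rejected task " + i); // happens at i == 3
            }
        }

        // Behavior #2: an unbounded LinkedBlockingQueue never reports "full", so the
        // pool never grows past its core size; extra tasks simply wait in the queue.
        ThreadPoolExecutor unbounded = new ThreadPoolExecutor(
                1, 3, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        for (int i = 0; i < 4; i++) {
            unbounded.execute(longTask); // never rejected
        }
        System.out.println("pool size with unbounded queue: " + unbounded.getPoolSize()); // stays at 1

        sync.shutdown();
        unbounded.shutdown();
    }
}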
You cannot really tinker much with the cached type or the fixed type, as their behavior is almost completely determined.
If you want to have a bounded and dynamic thread pool, you need to use a positive core size and max size combined with a queue of a finite size. For example,
new ThreadPoolExecutor(10,                                    // core size
                       50,                                    // max size
                       10 * 60,                               // idle timeout
                       TimeUnit.SECONDS,
                       new ArrayBlockingQueue<Runnable>(20)); // queue with a finite size
Addendum: this is a fairly old answer, and it appears that the JDK changed its behavior when it comes to a core size of 0. Since JDK 1.6, if the core size is 0 and the pool does not have any threads, the ThreadPoolExecutor will add a thread to execute the task. Therefore, a core size of 0 is an exception to the rule above. Thanks to Steve for bringing that to my attention.
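As a quick illustration of that addendum (a sketch, not code from the original question): on JDK 6 and later, a pool with a core size of 0 and an unbounded queue still spins up a thread to run a submitted task rather than leaving it stranded in the queue.

import java.util.concurrent.*;

// Core size 0, unbounded queue: on JDK 6+ the executor notices there are no
// workers and creates one anyway, so the task actually runs.
ThreadPoolExecutor zeroCore = new ThreadPoolExecutor(
        0, 1, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
zeroCore.execute(() -> System.out.println("ran on " + Thread.currentThread().getName()));
zeroCore.shutdown();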
Unless I've missed something, the solution to the original question is simple. The following code implements the desired behavior as described by the original poster. It will spawn up to 5 threads to work on an unbounded queue and idle threads will terminate after 60 seconds.
ThreadPoolExecutor tp = new ThreadPoolExecutor(5, 5, 60, TimeUnit.SECONDS,
        new LinkedBlockingQueue<Runnable>());
tp.allowCoreThreadTimeOut(true);
I had the same issue. Since no other answer puts all the issues together, I'm adding mine:
It is now clearly written in the docs: if you use an unbounded queue (such as a LinkedBlockingQueue with no capacity limit), the max threads setting has no effect; only core threads are used.
so:
import java.util.concurrent.*;

public class MyExecutor extends ThreadPoolExecutor {
    public MyExecutor() {
        super(4, 4, 5, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        allowCoreThreadTimeOut(true);
    }
    // The max pool size may never drop below the core size, so grow the max
    // first when scaling up and shrink the core first when scaling down.
    public void setThreads(int n) {
        int size = Math.max(1, n);
        if (size > getMaximumPoolSize()) {
            setMaximumPoolSize(size);
            setCorePoolSize(size);
        } else {
            setCorePoolSize(size);
            setMaximumPoolSize(size);
        }
    }
}
This executor has:
No concept of max threads, since we are using an unbounded queue: the queue never fills up, so the executor never creates extra, non-core threads under its usual policy.
A queue with a maximum size of Integer.MAX_VALUE. submit() will throw a RejectedExecutionException if the number of pending tasks exceeds Integer.MAX_VALUE. I'm not sure whether we would run out of memory first or whether that would ever happen.
Up to 4 core threads. Idle core threads exit automatically after being idle for 5 seconds, so these are strictly on-demand threads. The number can be varied using the setThreads() method.
Makes sure the number of core threads is never less than one, or else submit() would reject every task. Since the max pool size must be at least the core size, setThreads() sets the max threads as well, though the max thread setting is effectively useless with an unbounded queue. A minimal usage sketch follows below.
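For completeness, here is that usage sketch for the class above (the task body is just a placeholder):

MyExecutor executor = new MyExecutor();
executor.submit(() -> System.out.println("task on " + Thread.currentThread().getName()));
executor.setThreads(8);   // scale the pool up (or down) at runtime
executor.shutdown();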
In your first example, subsequent tasks are rejected because AbortPolicy is the default RejectedExecutionHandler. The ThreadPoolExecutor provides the following policies, which you can change via the setRejectedExecutionHandler method:
CallerRunsPolicy
AbortPolicy
DiscardPolicy
DiscardOldestPolicy
It sounds like you want a cached thread pool with a CallerRunsPolicy.
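For instance, the handler can also be swapped on an existing pool via setRejectedExecutionHandler (a small sketch with arbitrary sizes):

import java.util.concurrent.*;

ThreadPoolExecutor pool = new ThreadPoolExecutor(
        0, 10, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
// Replace the default AbortPolicy: when all 10 threads are busy, the submitting
// thread runs the task itself instead of getting a RejectedExecutionException.
pool.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());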
None of the answers here fixed my problem, which had to do with creating a limited number of HTTP connections using Apache's HTTP client (version 3.x). Since it took me a few hours to figure out a good setup, I'll share it:
private ExecutorService executor = new ThreadPoolExecutor(5, 10, 60L,
        TimeUnit.SECONDS, new SynchronousQueue<Runnable>(),
        Executors.defaultThreadFactory(), new ThreadPoolExecutor.CallerRunsPolicy());
This creates a ThreadPoolExecutor which starts with five threads and holds a maximum of ten simultaneously running threads, using CallerRunsPolicy to handle rejected executions.
Per the Javadoc for ThreadPoolExecutor:
If there are more than corePoolSize but less than maximumPoolSize threads running, a new thread will be created only if the queue is full. By setting corePoolSize and maximumPoolSize the same, you create a fixed-size thread pool.
(Emphasis mine.)
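Concretely, Executors.newFixedThreadPool(n) is built along these lines in the JDK (a sketch; exact details may vary by version):

import java.util.concurrent.*;

// Core size == max size, so the pool never grows past n threads, and the
// unbounded LinkedBlockingQueue absorbs any backlog of tasks.
int n = 5;
ExecutorService fixed = new ThreadPoolExecutor(
        n, n, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());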
jitter's answer is what you want, although mine answers your other question. :)