A question came up while I was reading the source code of ThreadPoolExecutor: if a thread pool exists to reuse existing threads and thereby reduce the overhead of thread creation and destruction, why doesn't it reuse core threads during the initial phase? That is, while the current number of threads is still below corePoolSize, why not first check whether an existing core thread has already finished its task and reuse it? Instead, a new thread is created for every task until corePoolSize is reached. Doesn't that contradict the design goal of a thread pool?
The following is part of the comment on the addWorker() method in ThreadPoolExecutor:
- @param firstTask the task the new thread should run first (or null if none). Workers are created with an initial first task (in method execute()) to bypass queuing when there are fewer than corePoolSize threads (in which case we always start one), or when the queue is full (in which case we must bypass queue). Initially idle threads are usually created via prestartCoreThread or to replace other dying workers.
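For example, here is a minimal sketch (the class name and pool parameters are my own) that makes this behaviour visible: the pool size keeps growing up to corePoolSize even when an already-created worker is idle.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CoreThreadGrowthDemo {
    public static void main(String[] args) throws InterruptedException {
        // corePoolSize = 4, maximumPoolSize = 4, unbounded work queue
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());

        for (int i = 1; i <= 4; i++) {
            pool.execute(() -> { /* trivial task that finishes immediately */ });
            Thread.sleep(100); // let the task finish so an idle worker exists
            // The pool size still grows toward corePoolSize, even though an
            // existing worker is sitting idle and could have taken the task.
            System.out.println("pool size after task " + i + ": " + pool.getPoolSize());
        }
        pool.shutdown();
    }
}
```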
This was actually requested already: JDK-6452337. A core libraries developer has noted:
I like this idea, but ThreadPoolExecutor is already complicated enough.
Keep in mind that corePoolSize is an essential part of ThreadPoolExecutor: it specifies the minimum number of workers that are kept alive, whether active or idle. Reaching this number naturally takes only a very short time. You set corePoolSize according to your needs, and the workload is expected to reach that number.
My assumption is that optimizing this "warm-up phase" (even taking it for granted that it would actually increase efficiency) is not worth it. I can't quantify the additional complexity such an optimization would bring, since I don't develop the Java core libraries, but I assume it outweighs the benefit.
You can think of it this way: the "warm-up phase" has a constant cost, while the thread pool runs for an indefinite amount of time. In an ideal world the initial phase should take no time at all, because the workload is already there when you create the thread pool. So this optimization targets a state that is not the expected steady state of the pool.
The worker threads have to be created at some point anyway; this optimization only delays their creation. Imagine you have a corePoolSize of 10: there is the overhead of creating at least 10 threads, and that overhead does not shrink by paying it later. Yes, the resources are also claimed later, but then the real question is whether the thread pool is configured correctly in the first place: is corePoolSize right, does it match the current workload? If you would rather pay the creation cost up front, you can prestart the core threads, as sketched below.
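A small sketch (the pool parameters are illustrative) that uses prestartAllCoreThreads() to create all core workers eagerly, so the cost is paid at construction time rather than on the first few calls to execute():

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PrestartDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10, 10, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());

        // Create all 10 core threads now instead of lazily on the first
        // 10 submitted tasks; returns the number of threads started.
        int started = pool.prestartAllCoreThreads();
        System.out.println("core threads started eagerly: " + started);

        pool.shutdown();
    }
}
```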
Notice that ThreadPoolExecutor has methods like setCorePoolSize(int) and allowCoreThreadTimeOut(boolean), among others, that allow you to configure the thread pool according to your needs.
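A short sketch (the parameter values are illustrative) of adjusting a running pool with these methods:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolTuningDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 8, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        // Raise the core size at runtime if the steady workload turns out
        // to be higher than originally expected.
        pool.setCorePoolSize(6);

        // Allow idle core threads to time out after keepAliveTime (60 s here),
        // so an over-sized core does not hold on to resources when load drops.
        pool.allowCoreThreadTimeOut(true);

        pool.shutdown();
    }
}
```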