Executors#newFixedThreadPool:
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}
Executors#newCachedThreadPool:
public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}
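For context, here is a minimal sketch (the class name, task timings, and pool size of 2 are illustrative, not from the original) contrasting how the two factories behave when several tasks arrive at once:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolComparison {
    public static void main(String[] args) throws InterruptedException {
        // Fixed pool: at most 2 threads; extra tasks wait in the LinkedBlockingQueue.
        ExecutorService fixed = Executors.newFixedThreadPool(2);
        // Cached pool: a new thread is created whenever a task cannot be handed
        // off to an idle worker; idle threads are reclaimed after 60 seconds.
        ExecutorService cached = Executors.newCachedThreadPool();

        for (int i = 0; i < 4; i++) {
            Runnable task = () -> {
                System.out.println(Thread.currentThread().getName() + " running");
                try {
                    Thread.sleep(500);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            };
            fixed.execute(task);   // tasks 3 and 4 queue behind the 2 workers
            cached.execute(task);  // each task gets a thread right away
        }

        fixed.shutdown();
        cached.shutdown();
        fixed.awaitTermination(5, TimeUnit.SECONDS);
        cached.awaitTermination(5, TimeUnit.SECONDS);
    }
}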
Why do the two thread pools use different queue implementations? I have looked at the Javadoc for LinkedBlockingQueue and SynchronousQueue, but I still don't understand why they are used here. Is it a performance consideration, or something else?
The answer is in the documentation of the ThreadPoolExecutor class:
Queuing

Any BlockingQueue may be used to transfer and hold submitted tasks. The use of this queue interacts with pool sizing:

- If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing.
- If corePoolSize or more threads are running, the Executor always prefers queuing a request rather than adding a new thread.
- If a request cannot be queued, a new thread is created unless this would exceed maximumPoolSize, in which case the task will be rejected.

There are three general strategies for queuing:

- Direct handoffs. A good default choice for a work queue is a SynchronousQueue that hands off tasks to threads without otherwise holding them. Here, an attempt to queue a task will fail if no threads are immediately available to run it, so a new thread will be constructed. This policy avoids lockups when handling sets of requests that might have internal dependencies. Direct handoffs generally require unbounded maximumPoolSizes to avoid rejection of new submitted tasks. This in turn admits the possibility of unbounded thread growth when commands continue to arrive on average faster than they can be processed.
- Unbounded queues. Using an unbounded queue (for example a LinkedBlockingQueue without a predefined capacity) will cause new tasks to wait in the queue when all corePoolSize threads are busy. Thus, no more than corePoolSize threads will ever be created. (And the value of the maximumPoolSize therefore doesn't have any effect.) This may be appropriate when each task is completely independent of others, so tasks cannot affect each other's execution; for example, in a web page server. While this style of queuing can be useful in smoothing out transient bursts of requests, it admits the possibility of unbounded work queue growth when commands continue to arrive on average faster than they can be processed.
- Bounded queues. A bounded queue (for example, an ArrayBlockingQueue) helps prevent resource exhaustion when used with finite maximumPoolSizes, but can be more difficult to tune and control. Queue sizes and maximum pool sizes may be traded off for each other: Using large queues and small pools minimizes CPU usage, OS resources, and context-switching overhead, but can lead to artificially low throughput. If tasks frequently block (for example if they are I/O bound), a system may be able to schedule time for more threads than you otherwise allow. Use of small queues generally requires larger pool sizes, which keeps CPUs busier but may encounter unacceptable scheduling overhead, which also decreases throughput.
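To make the bounded-queue strategy concrete, here is a minimal sketch of constructing a ThreadPoolExecutor directly; the pool sizes, keep-alive time, and queue capacity are arbitrary values chosen for illustration:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedQueuePool {
    public static void main(String[] args) {
        // 2 core threads, up to 4 threads total, idle non-core threads die
        // after 30s, and at most 10 tasks may wait in the bounded queue.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4,
                30L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(10));

        // Order of preference on submission, per the documentation above:
        // 1. fewer than 2 threads running      -> start a new core thread
        // 2. queue not full                    -> enqueue the task
        // 3. queue full, fewer than 4 threads  -> start a non-core thread
        // 4. queue full, 4 threads running     -> RejectedExecutionException
        pool.execute(() ->
                System.out.println("task on " + Thread.currentThread().getName()));

        pool.shutdown();
    }
}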
In practice, the SynchronousQueue hands each new Runnable straight to an available thread (the cached pool creates a new thread if none is free), while the LinkedBlockingQueue holds the task until one of the fixed pool's threads becomes available.
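The difference is visible in the queues themselves: internally, ThreadPoolExecutor#execute offers the task to the work queue without blocking, so a SynchronousQueue rejects the offer when no worker is waiting (which is what prompts the cached pool to spawn a thread), while an unbounded LinkedBlockingQueue always accepts it. A minimal sketch:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

public class QueueBehavior {
    public static void main(String[] args) {
        // SynchronousQueue has no capacity: offer succeeds only if another
        // thread is already blocked in take(). With no taker it fails,
        // signalling the cached pool to construct a new thread instead.
        SynchronousQueue<Runnable> handoff = new SynchronousQueue<>();
        System.out.println(handoff.offer(() -> {})); // prints false

        // LinkedBlockingQueue is unbounded by default: offer always succeeds,
        // so the fixed pool never grows past corePoolSize.
        LinkedBlockingQueue<Runnable> unbounded = new LinkedBlockingQueue<>();
        System.out.println(unbounded.offer(() -> {})); // prints true
    }
}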