I am using the Executors framework in Java to create thread pools for a multi-threaded application, and I have a question about performance.
My application can run in realtime or non-realtime mode. In realtime mode, I simply use the following:
THREAD_POOL = Executors.newCachedThreadPool();
But in non-realtime mode, I want the ability to control the size of my thread pool. I'm considering two options, but I don't really understand the difference between them or which one would perform better.
Option 1 is to use the simple way:
THREAD_POOL = Executors.newFixedThreadPool(threadPoolSize);
Option 2 is to create my own ThreadPoolExecutor like this:
RejectedExecutionHandler rejectHandler = new RejectedExecutionHandler() {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        try {
            executor.getQueue().put(r);
        } catch (Exception e) {}
    }
};

THREAD_POOL = new ThreadPoolExecutor(threadPoolSize, threadPoolSize,
        0, TimeUnit.SECONDS,
        new LinkedBlockingQueue<Runnable>(10000), rejectHandler);
I would like to understand what the advantage of the more complex option 2 is, and also whether I should use a data structure other than LinkedBlockingQueue. Any help would be appreciated.
Looking at the source code you'll realize that:
Executors.newFixedThreadPool(threadPoolSize);
is equivalent to:
return new ThreadPoolExecutor(threadPoolSize, threadPoolSize,
        0L, MILLISECONDS,
        new LinkedBlockingQueue<Runnable>());
Since it doesn't provide an explicit RejectedExecutionHandler, the default AbortPolicy is used. It basically throws RejectedExecutionException once the queue is full. But the queue is unbounded, so it can never fill up. Thus this executor accepts an infinite¹ number of tasks.
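To see what AbortPolicy does when a queue *can* fill up, here is a minimal sketch (class and method names are illustrative) with one worker thread and a queue of capacity 1 — the third submission triggers the default handler:

```java
import java.util.concurrent.*;

public class AbortPolicyDemo {

    // Submits three tasks to a pool with one thread and a queue of capacity 1;
    // returns true if the third submission was rejected by AbortPolicy.
    static boolean thirdTaskRejected() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(1)); // default handler = AbortPolicy

        CountDownLatch started = new CountDownLatch(1);
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            started.countDown();
            try { release.await(); } catch (InterruptedException ignored) {}
        };

        pool.execute(blocker);   // occupies the single worker thread
        started.await();
        pool.execute(blocker);   // fills the queue
        boolean rejected = false;
        try {
            pool.execute(blocker);   // queue full -> AbortPolicy throws
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return rejected;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("third task rejected: " + thirdTaskRejected());
    }
}
```

With the unbounded queue that newFixedThreadPool uses, this branch is simply unreachable.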
Your declaration is much more complex and quite different: new LinkedBlockingQueue<Runnable>(10000) will cause the pool to reject tasks (i.e. invoke the rejection handler) once more than 10000 are awaiting execution.
I don't understand what your RejectedExecutionHandler is doing. When the pool discovers it cannot put any more runnables into the queue, it calls your handler. In this handler you... try to put that Runnable into the queue again (which will block in virtually all cases, since the queue is full), and finally you swallow any exception. It seems like ThreadPoolExecutor.DiscardPolicy is what you are after.
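For comparison, here is a minimal sketch of DiscardPolicy (class and method names are illustrative): with one worker and a queue of capacity 1, the overflow task is silently dropped — no exception, no blocking:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class DiscardPolicyDemo {

    // Submits three tasks to a pool with one thread and a queue of capacity 1;
    // returns how many of them actually ran.
    static int tasksRunOfThree() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(1),
                new ThreadPoolExecutor.DiscardPolicy());

        CountDownLatch started = new CountDownLatch(1);
        CountDownLatch release = new CountDownLatch(1);
        AtomicInteger ran = new AtomicInteger();

        pool.execute(() -> {                // occupies the single worker thread
            ran.incrementAndGet();
            started.countDown();
            try { release.await(); } catch (InterruptedException ignored) {}
        });
        started.await();
        pool.execute(ran::incrementAndGet); // queued
        pool.execute(ran::incrementAndGet); // queue full -> silently discarded
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return ran.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("tasks that ran: " + tasksRunOfThree()); // 2 of 3
    }
}
```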
Looking at your comments below, it seems like you are trying to block, or somehow throttle, clients if the task queue grows too large. I don't think blocking inside a RejectedExecutionHandler is a good idea. Instead, consider the CallerRunsPolicy rejection policy. It's not entirely the same, but close enough.
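Here is a minimal sketch of CallerRunsPolicy (class and method names are illustrative): once the queue is full, the submitting thread executes the task itself, which naturally slows down producers:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicReference;

public class CallerRunsDemo {

    // Returns the name of the thread that ended up running the overflow task.
    static String overflowRunnerThread() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(1),
                new ThreadPoolExecutor.CallerRunsPolicy());

        CountDownLatch started = new CountDownLatch(1);
        CountDownLatch release = new CountDownLatch(1);

        pool.execute(() -> {            // occupies the single worker thread
            started.countDown();
            try { release.await(); } catch (InterruptedException ignored) {}
        });
        started.await();
        pool.execute(() -> {});         // fills the queue

        AtomicReference<String> runner = new AtomicReference<>();
        pool.execute(() ->              // queue full -> runs in the caller's thread
                runner.set(Thread.currentThread().getName()));

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return runner.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("overflow task ran on: " + overflowRunnerThread());
    }
}
```

Note that execute() returns only after the overflow task has finished in the caller's thread, which is exactly the throttling effect.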
To wrap up: if you want to limit the number of pending tasks, your approach is almost right. If you only want to limit the number of concurrent threads, the first one-liner is enough.
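Putting the pieces together, a sketch of a pool that bounds both the thread count and the pending-task count, and throttles submitters instead of dropping work (the class name and the choice of CallerRunsPolicy are illustrative, not the only option):

```java
import java.util.concurrent.*;

public class BoundedPoolFactory {

    // Bounds both the worker-thread count and the queue capacity; when the
    // queue is full, the submitting thread runs the task itself.
    static ThreadPoolExecutor create(int threadPoolSize, int queueCapacity) {
        return new ThreadPoolExecutor(
                threadPoolSize, threadPoolSize,
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(queueCapacity),
                new ThreadPoolExecutor.CallerRunsPolicy());
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = create(4, 10000);
        pool.execute(() ->
                System.out.println("hello from " + Thread.currentThread().getName()));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```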
¹ assuming 2^31 is infinity