What's the recommended corePoolSize passed to ThreadPoolExecutor/ScheduledThreadPoolExecutor?
Runtime.getRuntime().availableProcessors()?
Or Runtime.getRuntime().availableProcessors() * 2?
On the one hand, I'd like the CPU (all the cores) to be utilized 100%, but with as few threads as possible, so that tasks finish as quickly as possible and there isn't too much context-switching overhead.
On the other hand, some of the threads might not use the CPU all the time, e.g. while waiting on the network. In that case, I'd like new threads to be spawned to keep all the cores busy.
I'm OK with a temporary over-use of the CPU; that's better than under-use and tasks not being handled.
So how can I achieve this thread load balance? Thanks.
Sizing thread pools depends on the nature of the tasks you are going to execute on that pool. As a general rule, it depends on the ratio between wait time and CPU time, and on the number of CPUs available.
The general formula to apply, which gives the optimal pool size for keeping the processors at the desired utilization, is:

Nthreads = Ncpu * Ucpu * (1 + W/C)

where:
Ncpu = number of CPUs, i.e. Runtime.getRuntime().availableProcessors()
Ucpu = target CPU utilization (a value between 0 and 1)
W/C = ratio of wait time to compute time
You can find out more information in Java Concurrency In Practice, section 8.2 Sizing Thread Pools.
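As a rough illustration (not taken from the book itself), the formula translates directly into a few lines of Java; the wait/compute ratio used below is only an assumed example value that you would need to measure for your own tasks:

public class PoolSizing {
    public static void main(String[] args) {
        int nCpu = Runtime.getRuntime().availableProcessors();
        double targetUtilization = 1.0;   // desired CPU utilization, between 0 and 1
        double waitToComputeRatio = 0.5;  // W/C ratio: assumed here, measure it for your real tasks
        // Nthreads = Ncpu * Ucpu * (1 + W/C), rounded up
        int poolSize = (int) Math.ceil(nCpu * targetUtilization * (1 + waitToComputeRatio));
        System.out.println("Suggested pool size: " + poolSize);
    }
}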
As an addition to john16384's answer:
Your suggestion of Runtime.getRuntime().availableProcessors() * 2 instead of Runtime.getRuntime().availableProcessors()
might be a good idea if your CPU supports two hardware threads per core (simultaneous multithreading / hyper-threading).
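As a minimal sketch (assuming you go with one of the two sizes from the question), the core pool size can simply be derived from the available processors:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

public class PoolFromCores {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        // Plain choice: one thread per available hardware thread.
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(cores);
        // Alternative from the question, mainly useful when tasks block on I/O:
        // ScheduledExecutorService pool = Executors.newScheduledThreadPool(cores * 2);
        pool.shutdown();
    }
}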
You could look into Amdahl's law to help decide on the number of threads. Take your program as if it were single-threaded and see where it can be parallelized. That can give you an idea of N (here we approximate N, the number of processors in the law, by the number of threads in the application).
Now, in my experience, the best way to find the "best" number of threads remains to write your multi-threaded program, run it with as many different thread counts as practical, and see which one gives the best performance (see also Optimal number of threads per core). A rough benchmark along those lines is sketched below.
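A minimal benchmarking sketch of that approach (the simulateTask() workload below is a made-up stand-in for your real tasks) could look like this:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolSizeBenchmark {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        // Try a few candidate pool sizes and compare wall-clock time for the same batch.
        for (int size : new int[] { cores, cores * 2, cores * 4 }) {
            long elapsed = runBatch(size, 200);
            System.out.println("pool=" + size + " -> " + elapsed + " ms");
        }
    }

    static long runBatch(int poolSize, int taskCount) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        List<Callable<Void>> tasks = new ArrayList<>();
        for (int i = 0; i < taskCount; i++) {
            tasks.add(() -> { simulateTask(); return null; });
        }
        long start = System.nanoTime();
        pool.invokeAll(tasks);   // blocks until every task has finished
        long elapsed = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
        pool.shutdown();
        return elapsed;
    }

    // Placeholder workload: a mix of blocking (sleep) and CPU work, purely illustrative.
    static void simulateTask() {
        try {
            Thread.sleep(5);   // pretend network wait
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        double x = 0;
        for (int i = 0; i < 1_000_000; i++) {
            x += Math.sqrt(i); // pretend CPU work
        }
    }
}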