I'm playing around with the JVM (Oracle 1.7 64 bit) on a Linux box (AMD 6 Core, 16 GB RAM) to see how the number of threads in an application affects performance. I'm hoping to measure at which point context switching degrades performance.
I have created a little application that creates a thread execution pool:
Executors.newFixedThreadPool(numThreads)
I adjust numThreads every time I run the program, to see the effect it has.

I then submit numThreads jobs (instances of java.util.concurrent.Callable) to the pool. Each one increments an AtomicInteger, does some work (creates an array of random integers and shuffles it), and then sleeps a while. The idea is to simulate a web service call. Finally, the job resubmits itself to the pool, so that I always have numThreads jobs working.
I am measuring the throughput, as in the number of jobs that are processed per minute.
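A minimal sketch of what that program looks like (class name, array size, and the exact work/sleep durations here are placeholders of my own, not the precise values I used):

import java.util.Arrays;
import java.util.Collections;
import java.util.Random;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadPoolBenchmark {

    static final int numThreads = 1000;                  // varied on each run
    static final ExecutorService pool = Executors.newFixedThreadPool(numThreads);
    static final AtomicInteger counter = new AtomicInteger();

    // One self-resubmitting job: count it, do a little CPU work, then "wait" like a web service call.
    static final Callable<Void> job = new Callable<Void>() {
        public Void call() throws Exception {
            counter.incrementAndGet();
            Integer[] data = new Integer[1000];
            Random rnd = ThreadLocalRandom.current();
            for (int i = 0; i < data.length; i++) {
                data[i] = rnd.nextInt();
            }
            Collections.shuffle(Arrays.asList(data));    // the "work"
            Thread.sleep(300);                           // the simulated wait
            pool.submit(this);                           // keep numThreads jobs in flight
            return null;
        }
    };

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < numThreads; i++) {
            pool.submit(job);
        }
        while (true) {                                   // report throughput once a minute
            int before = counter.get();
            Thread.sleep(60_000);
            System.out.println((counter.get() - before) + " jobs/minute");
        }
    }
}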
With several thousand threads, I can process up to 400,000 jobs a minute. Above 8000 threads, the results start to vary a lot, suggesting that context switching is becoming a problem. But I can continue to increase the number of threads to 30,000 and still get higher throughput (between 420,000 and 570,000 jobs per minute).
Now the question: I get a java.lang.OutOfMemoryError: Unable to create new native thread with more than about 31,000 jobs. I have tried setting -Xmx6000M, which doesn't help. I tried playing with -Xss, but that doesn't help either.

I've read that ulimit can be useful, but increasing it with ulimit -u 64000 didn't change anything.
For info:
[root@apollo ant]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 127557
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
So, question #1: What do I have to do to be able to create a bigger thread pool?
Question #2: At what stage should I expect to see context switching really reducing throughput and causing the process to grind to a halt?
Here are some results, after I modified it to do a little more processing (as was suggested) and started recording average response times (as was also suggested).
// ( (n_cores x t_request) / (t_request - t_wait) ) + 1
// 300 ms wait, 10ms work, roughly 310ms per job => ideal response time, 310ms
// ideal num threads = 1860 / 10 + 1 = 187 threads
//
// results:
//
// 100 => 19,000 thruput, 312ms response, cpu < 50%
// 150 => 28,500 thruput, 314ms response, cpu 50%
// 180 => 34,000 thruput, 318ms response, cpu 60%
// 190 => 35,800 thruput, 317ms response, cpu 65%
// 200 => 37,800 thruput, 319ms response, cpu 70%
// 230 => 42,900 thruput, 321ms response, cpu 80%
// 270 => 50,000 thruput, 324ms response, cpu 80%
// 350 => 64,000 thruput, 329ms response, cpu 90%
// 400 => 72,000 thruput, 335ms response, cpu >90%
// 500 => 87,500 thruput, 343ms response, cpu >95%
// 700 => 100,000 thruput, 430ms response, cpu >99%
// 1000 => 100,000 thruput, 600ms response, cpu >99%
// 2000 => 105,000 thruput, 1100ms response, cpu >99%
// 5000 => 131,000 thruput, 1600ms response, cpu >99%
// 10000 => 131,000 thruput, 2700ms response, cpu >99%, 16GB Virtual size
// 20000 => 140,000 thruput, 4000ms response, cpu >99%, 27GB Virtual size
// 30000 => 133,000 thruput, 2800ms response, cpu >99%, 37GB Virtual size
// 40000 => - thruput, -ms response, cpu >99%, >39GB Virtual size => java.lang.OutOfMemoryError: unable to create new native thread
I interpret them as:
1) Even though the application is sleeping for 96.7% of the time, that still leaves lots of thread switching to be done.
2) Context switching is measurable, and is shown in the response time.
What is interesting here is that, when tuning an app, you might choose an acceptable response time, say 400ms, and increase the number of threads until you get that response time, which in this case would let the app process around 95 thousand requests a minute.
Often people say that the ideal number of threads is near the number of cores. In apps that have wait time (blocked threads, say waiting for a database or web service to respond), the calculation needs to consider that (see my equation above). But even that theoretical ideal isn't an actual ideal, when you look at the results or when you tune to a specific response time.
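To make that calculation concrete, here is the arithmetic from my setup as a tiny snippet (the numbers are the ones quoted above):

public class IdealThreadCount {
    public static void main(String[] args) {
        // idealThreads = (nCores * tRequest) / (tRequest - tWait) + 1
        int nCores = 6;       // the 6-core box
        int tRequest = 310;   // ms per job: ~10 ms of work + 300 ms of waiting
        int tWait = 300;      // ms of that spent blocked/sleeping
        int idealThreads = (nCores * tRequest) / (tRequest - tWait) + 1;
        System.out.println(idealThreads);   // 1860 / 10 + 1 = 187
    }
}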
1. Account for 1 GB of RAM per hardware thread available on the system. This is often the most critical factor: having a 32-thread system with 16 GB or less of RAM seriously inhibits the processing capability of the system.
Ideally, with no I/O, no synchronization, etc., and nothing else running, use 48 task threads. Realistically, about 95 threads may be better to exploit the maximum of your machine, because a core sometimes waits for data or I/O, so thread 2 can run while thread 1 is stalled.
Multi-threading gets around requiring additional memory because it relies on shared memory between threads. Shared memory removes the additional memory overhead but still incurs the penalty of increased context switching.

Unless you're talking about ridiculous numbers of threads (tens of thousands), the memory consumption is negligible on modern systems.
I get a java.lang.OutOfMemoryError: Unable to create new native thread with more than about 31,000 jobs. I have tried setting -Xmx6000M which doesn't help. I tried playing with -Xss but that doesn't help either.
The -Xmx setting won't help because thread stacks are not allocated from the heap.
What is happening is that the JVM is asking the OS for a memory segment (outside of the heap!) to hold the stack, and the OS is refusing the request. The most likely reasons for this are a ulimit or an OS memory resource issue:
The "data seg size" ulimit, is unlimited, so that shouldn't be the problem.
So that leaves memory resources. 30,000 threads at 1 MB apiece is ~30 GB, which is a lot more than the physical memory you have. My guess is that there is enough swap space for 30 GB of virtual memory, but you have pushed the boundary just a bit too far.
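The back-of-the-envelope arithmetic, if you want to plug in your own numbers (assuming the ~1 MB default stack quoted for 64-bit Linux further down):

public class StackFootprint {
    public static void main(String[] args) {
        long threads = 30_000;
        long stackBytesPerThread = 1L << 20;   // assume the ~1 MB default per-thread stack
        double totalGb = threads * stackBytesPerThread / (double) (1L << 30);
        System.out.printf("~%.0f GB of virtual memory just for thread stacks%n", totalGb); // ~29 GB
    }
}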
The -Xss setting should help, but you need to make the requested stack size LESS than the default size of 1m. And besides, there is a hard minimum size.
Question #1: What do I have to do to be able to create a bigger thread pool?
Decrease the default stack size below what it currently is, or increase the amount of available virtual memory. (The latter is NOT recommended, since it looks like you are already seriously over-allocating.)
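For what it's worth, a stack size can also be requested per thread through the four-argument Thread constructor (the JavaDoc says the VM is free to treat the value as a suggestion), so a ThreadFactory along these lines is another way to experiment; the 256 KB figure is just an illustration:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

public class SmallStackPool {

    // ThreadFactory that asks for a smaller stack per thread; the VM may round it up or ignore it.
    static class SmallStackThreadFactory implements ThreadFactory {
        private final long stackSize;
        SmallStackThreadFactory(long stackSize) { this.stackSize = stackSize; }
        public Thread newThread(Runnable r) {
            return new Thread(null, r, "worker", stackSize);
        }
    }

    public static void main(String[] args) {
        ExecutorService pool =
                Executors.newFixedThreadPool(20_000, new SmallStackThreadFactory(256 * 1024));
        // submit the benchmark jobs as before ...
        pool.shutdown();
    }
}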
Question #2: At what stage should I expect to see context switching really reducing throughput and causing the process to grind to a halt?
It is not possible to predict that. It will be highly dependent on what the threads are actually doing. And indeed, I don't think that your benchmarking is going to give you answers that will tell you how a real multi-threaded application is going to behave.
The Oracle site says this on the topic of thread stack space:
In Java SE 6, the default on Sparc is 512k in the 32-bit VM, and 1024k in the 64-bit VM. On x86 Solaris/Linux it is 320k in the 32-bit VM and 1024k in the 64-bit VM.
On Windows, the default thread stack size is read from the binary (java.exe). As of Java SE 6, this value is 320k in the 32-bit VM and 1024k in the 64-bit VM.
You can reduce your stack size by running with the -Xss option. For example:
java -server -Xss64k
Note that on some versions of Windows, the OS may round up thread stack sizes using very coarse granularity. If the requested size is less than the default size by 1K or more, the stack size is rounded up to the default; otherwise, the stack size is rounded up to a multiple of 1 MB.
64k is the least amount of stack space allowed per thread.