Every process has at least one thread, and there is no single fixed maximum number of threads a process can use. For workloads that parallelize well, more threads can improve performance up to a point, and with multiple threads a single process can handle several tasks concurrently.
The maximum user processes limit (nproc) on Linux counts every thread across all processes owned by a given user. The default is 1024 on some distributions, which is often too low once you add up the threads of every process that user runs.
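As a rough sketch of working with that cap (assuming a bash shell and a distribution that uses pam_limits; the user name and numbers below are only placeholders):
ulimit -u                  # show the current max user processes (nproc) for this shell
ulimit -u 4096             # raise the soft limit for this session, up to the hard limit
# To make it persistent, add lines like these to /etc/security/limits.conf:
# someuser  soft  nproc  4096
# someuser  hard  nproc  8192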
I cover thread limits first because every active process has at least one thread (only a process that has terminated but is still being referenced, i.e. a zombie, has none), so whatever caps the number of threads also caps the number of processes.
Simultaneous multithreading, also known as SMT and marketed by Intel as Hyper-Threading, maps a single hardware core to multiple logical CPUs on Linux. With it enabled, multiple hardware threads can issue instructions to the same core in each cycle.
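A quick way to check whether SMT is in play on a given box (lscpu ships with util-linux and is widely available):
lscpu | grep -i 'thread(s) per core'   # 2 or more means SMT/Hyper-Threading is active
nproc                                  # number of logical CPUs visible to the scheduler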
Linux doesn't have a separate threads-per-process limit, just a limit on the total number of processes on the system (on Linux, threads are essentially just processes with a shared address space), which you can view like this:
cat /proc/sys/kernel/threads-max
The default is the number of memory pages divided by 4. You can increase it like this:
echo 100000 > /proc/sys/kernel/threads-max
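The same knob can be read and set through sysctl; a sketch, assuming you are root and want the change to survive a reboot:
sysctl kernel.threads-max                                # read the current system-wide cap
sysctl -w kernel.threads-max=100000                      # set it for the running kernel
echo 'kernel.threads-max = 100000' >> /etc/sysctl.conf   # persist it across reboots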
There is also a limit on the number of processes (and hence threads) that a single user may create; see ulimit/getrlimit for details regarding these limits.
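For example (prlimit from util-linux wraps getrlimit/setrlimit; the numbers here are only illustrative):
prlimit --pid $$ --nproc              # show the soft and hard RLIMIT_NPROC of this shell
prlimit --pid $$ --nproc=4096:8192    # change them for the running shell (raising the hard limit needs privilege)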
It is wrong to say that Linux has no separate threads-per-process limit. Linux does enforce a maximum number of threads per process, just indirectly:
number of threads = total virtual memory / (stack size × 1024 × 1024), with virtual memory in bytes and stack size in MB
Thus, the number of threads per process can be increased by raising the total virtual memory limit or by lowering the stack size. But lowering the stack size too far can cause crashes from stack overflow, while the maximum virtual memory is bounded by the swap memory.
Check your machine:
Total virtual memory: ulimit -v
(default is unlimited, so to increase it you need to increase swap memory)
Total stack size: ulimit -s
(default is 8 MB)
Commands to increase these values:
ulimit -s newvalue
ulimit -v newvalue
*Replace newvalue with the value you want to set as the limit (both are expressed in kilobytes).
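To see what the formula above works out to on your machine, a minimal sketch (assuming a bash shell; both ulimit values are reported in kilobytes):
vm_kb=$(ulimit -v)       # total virtual memory limit in KB, or "unlimited"
stack_kb=$(ulimit -s)    # per-thread stack size in KB
if [ "$vm_kb" = "unlimited" ]; then
  echo "virtual memory is unlimited, so the cap comes from threads-max/pid_max instead"
else
  echo "approximate threads per process: $(( vm_kb / stack_kb ))"
fi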
References:
http://dustycodes.wordpress.com/2012/02/09/increasing-number-of-threads-per-process/
In practical terms, the limit is usually determined by stack space. If each thread gets a 1 MB stack (I can't remember if that is the default on Linux), then a 32-bit system will run out of address space after about 3000 threads (assuming the last GB of address space is reserved for the kernel).
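The arithmetic behind that estimate, assuming roughly 3 GB of usable user address space and 1 MB per stack:
echo $(( 3 * 1024 / 1 ))   # 3 GB expressed in MB, divided by 1 MB per stack = 3072, i.e. roughly 3000 threads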
However, you'll most likely see terrible performance if you use more than a few dozen threads: sooner or later you accumulate too much context-switching overhead, too much work in the scheduler, and so on. (Merely creating a large number of threads does little more than eat a lot of memory, but a lot of threads that all have actual work to do will slow you down as they fight for the available CPU time.)
What are you doing where this limit is even relevant?
A proper recipe for 100k threads on Linux:
ulimit -s 256                                  # per-thread stack size, in KB
ulimit -i 120000                               # max pending signals for the user
echo 120000 > /proc/sys/kernel/threads-max     # system-wide cap on the number of threads
echo 600000 > /proc/sys/vm/max_map_count       # max memory map areas per process; each thread stack needs at least one
echo 200000 > /proc/sys/kernel/pid_max         # thread IDs come out of the PID space, so raise its ceiling too
./100k-pthread-create-app                      # the test program that actually creates the threads
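None of the echo/ulimit settings above survive a reboot. A sketch of persisting them, assuming a distribution with /etc/sysctl.d and pam_limits (the file name and values are just the ones from the recipe):
# /etc/sysctl.d/99-threads.conf -- applied at boot, or immediately with: sysctl --system
kernel.threads-max = 120000
vm.max_map_count = 600000
kernel.pid_max = 200000
# /etc/security/limits.conf -- per-user stack size (KB) and pending signals
* soft stack 256
* soft sigpending 120000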
2018 update from @Thomas, on systemd systems:
/etc/systemd/logind.conf: UserTasksMax=100000