So I know I can increase the number of threads of a process in Linux using setrlimit and friends. According to this, the theoretical limit on the number of threads is determined by memory (somewhere around 100,000k). For my use case I'm looking into using the FIFO scheduler in a cooperative style, so spurious context switches aren't a concern. I know I can limit the number of active threads to the number of cores. My question is what the practical limit on the number of threads is, after which assumptions in the scheduler start being violated. If I maintain a true cooperative style, are additional threads "free"? Any case studies or actual examples would be especially interesting.
The Apache server seems to be the most analagous program to this situation. Does anybody have any numbers related to how many threads they've seen Apache spawn before becoming useless?
Related, but has to do with Windows, pre-emptive code.
I believe the number of threads is limited:

- by the available memory (each thread needs at least several pages, and often many of them, notably for its stack and thread-local storage). See the pthread_attr_setstacksize function to tune that; thread stacks of a megabyte each are not uncommon.
- at least on Linux (NPTL, i.e. current Glibc) and other systems where user threads are the same as kernel threads, by the number of tasks the kernel can schedule.
I would guess that on most Linux systems, the second limitation is stronger than the first. Kernel threads (on Linux) are created through the clone(2) Linux system call. In old Unix or Linux kernels, the number of tasks was hardwired. It is tunable today (via /proc/sys/kernel/threads-max), but I guess the default is in the many thousands, not the millions!
And you should consider coding in the Go language; its goroutines are the feather-light threads you are dreaming of.
If you want many cooperative threads, you could look into Chicken Scheme implementation tricks.