
What's the point of multi-threading on a single core?

I've been playing with the Linux kernel recently and diving back into the days of OS courses from college.

Just like back then, I'm playing around with threads and the like. All this time I had been assuming that threads were automatically running concurrently on multiple cores but I've recently discovered that you actually have to explicitly code for handling multiple cores.

So what's the point of multi-threading on a single core? The only example I can think of is from college when writing a client/server program but that seems like a weak point.

asked May 14 '16 by 8protons

People also ask

What is the use of multithreading on a single core CPU?

In computer architecture, multithreading is the ability of a central processing unit (CPU) (or a single core in a multi-core processor) to execute multiple processes or threads concurrently, supported by the operating system.

What is the advantage of having more threads than cores?

Having more threads than cores means useful work can be done while high-latency tasks are resolved. The CPU has a thread scheduler that assigns priority to each thread, and allows a thread to sleep, then resume after a predetermined time.

What is the advantage of multi-threading over single threading?

Only multi-threading can give you the experience of multitasking, where multiple threads from multiple processes get CPU slices in round-robin fashion.

How many threads should I use per core?

The ideal usage of threads is, indeed, one per core. However, unless you exclusively use asynchronous/non-blocking IO, there's a good chance that you will have threads blocked on IO at some point, which will not use your CPU. Also, typical programming languages make it somewhat difficult to use 1 thread per CPU.


3 Answers

All this time I had been assuming that threads were automatically running concurrently on multiple cores but I've recently discovered that you actually have to explicitly code for handling multiple cores.

The above is incorrect for any widely used, modern OS. All of Linux's schedulers, for example, will automatically schedule threads on different cores and even automatically move threads from one core to another when necessary to maximize core usage. There are some APIs that allow you to modify the schedulers' behavior, but these APIs are generally used to disable automatic thread-to-core scheduling, not to enable it.
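A minimal sketch of this point (in Python; not from the original answer): the code below creates several threads with no core-assignment calls of any kind, and the OS scheduler alone decides where each one runs. Note that CPython's GIL serializes CPU-bound bytecode, so this illustrates automatic scheduling, not parallel speedup.

```python
import threading

# No core-assignment code anywhere: the OS scheduler decides which core
# (if any beyond one) runs each thread, and may migrate threads at will.
def count_up(n, out, i):
    total = 0
    for k in range(n):
        total += k
    out[i] = total

results = [None] * 4
threads = [threading.Thread(target=count_up, args=(100_000, results, i))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # four identical partial sums, one per thread
```

The APIs mentioned above (such as Linux's sched_setaffinity) would only be needed to *restrict* this automatic placement, e.g. to pin a thread to one core.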

So what's the point of multi-threading on a single core?

Imagine you have a GUI program whose purpose is to execute an expensive computation (for example, render a 3D image or a Mandelbrot set) and then display the result. Let's say this computation takes 30 seconds to complete on this particular CPU. If you implement that program the obvious way, and use only a single thread, then the user's GUI controls will be unresponsive for 30 seconds while the calculation is executing -- the user will be unable to do anything with your program, and possibly unable to do anything with his computer at all. Since users expect GUI controls to be responsive at all times, that would be a poor user experience.

If you implement that program with two threads (one GUI thread and one rendering thread), on the other hand, the user will be able to click buttons, resize the window, quit the program, choose menu items, etc, even while the computation is executing, because the OS is able to wake up the GUI thread and allow it to handle mouse/keyboard events when necessary.
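The two-thread structure can be sketched without a real GUI toolkit. Everything below is a simulation: the names (render, result_q) are hypothetical, the 30-second render is a short sleep, and the "events" are just a counter incremented by the main thread while it waits.

```python
import threading
import queue
import time

result_q = queue.Queue()

def render():
    time.sleep(0.2)              # stand-in for the 30-second computation
    result_q.put("image data")   # hand the finished result to the GUI thread

worker = threading.Thread(target=render, daemon=True)
worker.start()

events_handled = 0
while True:
    try:
        image = result_q.get(timeout=0.05)  # poll briefly for the result...
        break
    except queue.Empty:
        events_handled += 1  # ...otherwise the main thread is free to
                             # handle clicks, resizes, menu items, etc.

print(image, events_handled)
```

A real toolkit would block in its own event loop rather than poll, but the division of labor is the same: the worker computes, the main thread stays responsive.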

Of course, it is possible to write this program with a single thread and keep its GUI responsive, by writing your single thread to do just a few milliseconds worth of computation, then check to see if there are GUI events available to process, handling them, then going back to do a bit more computation, etc. But if you code your app this way, you are essentially writing your own (very primitive) thread scheduler inside your app anyway, so why reinvent the wheel?
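That hand-rolled interleaving might look like the sketch below, with a hypothetical pending_events list standing in for a real event queue:

```python
# Single-threaded alternative: alternate small slices of computation
# with event checks -- effectively a homemade cooperative scheduler.
pending_events = ["click", "resize", "keypress"]

total = 0
CHUNK = 10_000
for start in range(0, 100_000, CHUNK):
    for k in range(start, start + CHUNK):  # a few milliseconds of work...
        total += k
    while pending_events:                  # ...then drain any queued UI events
        event = pending_events.pop(0)
        # dispatch event to its handler here

print(total)
```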

The first versions of MacOS were designed to run on a single core, and had no real support for multithreading; they relied on cooperative multitasking instead. This forced every application developer to share the CPU by hand -- even if their app did not have any extended computations, they had to explicitly yield the CPU at regular intervals, e.g. by calling WaitNextEvent. This lack of preemptive scheduling made early (pre-MacOS-X) versions of MacOS famously unreliable at multitasking, since just one poorly written application could bring the whole computer to a grinding halt.

answered Oct 08 '22 by Jeremy Friesner


First, a program not only computes, but also waits for input/output, which is effectively carried out by a separate I/O processor. So even a single-core machine is, in this sense, a multi-processor machine, and employing multithreading is justified.
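This overlap of waiting is easy to demonstrate; in the sketch below the sleeps are stand-ins for blocking I/O (a network read, a disk seek), and the timings are illustrative:

```python
import threading
import time

def fake_io(delay, out, i):
    time.sleep(delay)  # simulated blocking I/O: the CPU is idle here
    out[i] = i

N, DELAY = 4, 0.2
out = [None] * N
start = time.monotonic()
threads = [threading.Thread(target=fake_io, args=(DELAY, out, i))
           for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

print(out, round(elapsed, 2))  # ~0.2s total, not 0.8s: the four waits overlap
```

Even on one core, four threads blocked on I/O wait concurrently; run sequentially, the same work would take roughly four times as long.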

Second, a task can be divided into several threads for the sake of modularity.

answered Oct 08 '22 by Alexei Kaigorodov


Multithreading is not only for taking advantage of multiple cores.

You need multiple processes for multitasking. For similar reasons you can have multiple threads, which are lightweight compared with processes.

You probably don't want to spawn processes all the time for things like blocking I/O. That may be overkill.

And there are fibers, which are even more lightweight still. So we have processes, threads, and fibers for different levels of need.
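One way to sketch the fiber idea is with Python generators acting as cooperatively scheduled tasks under a toy round-robin scheduler (all names here are illustrative, not any real fiber API):

```python
# A minimal 'fiber' sketch: each yield hands control back to the
# tiny round-robin scheduler below -- no OS threads involved.
def task(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"

def run(fibers):
    trace = []
    while fibers:
        fib = fibers.pop(0)
        try:
            trace.append(next(fib))  # run the fiber until its next yield
            fibers.append(fib)       # re-queue it: round-robin order
        except StopIteration:
            pass                     # finished fibers are simply dropped
    return trace

print(run([task("a", 2), task("b", 2)]))  # a:0 b:0 a:1 b:1 -- interleaved
```

Unlike threads, these "fibers" switch only at explicit yield points, which is precisely what makes them cheaper than threads and threads cheaper than processes.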

answered Oct 08 '22 by cshu