I think I "get" the basics of multi-threading with Java. If I'm not mistaken, you take some big job and figure out how you are going to chunk it up into multiple (concurrent) tasks. Then you implement those tasks as either `Runnable`s or `Callable`s and submit them all to an `ExecutorService`. (So, to begin with, if I am mistaken on this much, please start by correcting me!)
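That is indeed the standard shape. As a minimal sketch (the class name, data, and chunk count are my own, for illustration), here is a big job -- summing a large array -- chunked into one `Callable` per range and submitted to a fixed thread pool:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ChunkSumDemo {

    // Splits the array into `chunks` ranges, sums each range in its own task,
    // and combines the partial results on the submitting thread.
    static long parallelSum(int[] data, int chunks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(chunks);
        try {
            List<Future<Long>> futures = new ArrayList<>();
            int chunkSize = data.length / chunks;
            for (int c = 0; c < chunks; c++) {
                final int start = c * chunkSize;
                final int end = (c == chunks - 1) ? data.length : start + chunkSize;
                Callable<Long> task = () -> {
                    long sum = 0;
                    for (int i = start; i < end; i++) sum += data[i];
                    return sum;
                };
                futures.add(pool.submit(task));
            }
            long total = 0;
            for (Future<Long> f : futures) total += f.get(); // blocks until each partial sum is ready
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        int[] data = new int[1000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        System.out.println(parallelSum(data, 4)); // prints 500500 (sum of 1..1000)
    }
}
```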
Second, I have to imagine that the code you implement inside `run()` or `call()` has to be as "parallelized" as possible, using non-blocking algorithms, etc., and that this is where the hard part is (writing parallel code). Correct? Not correct?
But the real problem I'm still having with Java concurrency (and I guess concurrency in general), and which is the true subject of this question, is:
When is it even appropriate to multi-thread in the first place?
I saw an example in another Stack Overflow question where the poster proposed creating multiple threads to read and process a huge text file (the book Moby Dick), and one answerer commented that multi-threading for the purpose of reading from disk was a terrible idea. Their reasoning was that you'd have multiple threads adding context-switching overhead on top of an already slow process (disk access).

So that got me thinking: what classes of problems are appropriate for multi-threading, and what classes of problems should always be serialized? Thanks in advance!
Multithreading is the ability of a program (or operating system) to run two or more parts of the program concurrently, without requiring multiple copies of the program on the computer. It also lets a program handle multiple requests at the same time.

We use multithreading in Java to perform multiple tasks at a time. The main objective of multithreading is to execute two or more parts of a program concurrently, to make better use of CPU time. A multithreaded program consists of two or more parts that can run concurrently.
Multi-threading has two main advantages, IMO:

- For CPU-bound work, it lets the program use several cores at once instead of just one.
- For IO-bound work, it lets the program stay responsive or make progress elsewhere while some threads are blocked waiting on slow operations.
Note: the problem with reading from the same disk with multiple threads is that instead of reading the whole long file sequentially, the threads force the disk to seek between various physical locations at each context switch. Since all the threads are waiting for the disk reads to finish (they're IO-bound), this makes the reading slower than if a single thread read everything. But once the data is in memory, it does make sense to split the work between threads.
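To make that pattern concrete, here is a sketch (file contents and helper names are my own): a single thread reads the file sequentially, and only the in-memory, CPU-bound processing is divided among threads:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ReadThenSplit {

    // Counts words across the lines, splitting the work among nThreads.
    static int countWords(List<String> lines, int nThreads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        try {
            List<Future<Integer>> parts = new ArrayList<>();
            int chunk = (lines.size() + nThreads - 1) / nThreads;
            for (int t = 0; t < nThreads; t++) {
                int from = Math.min(t * chunk, lines.size());
                int to = Math.min(from + chunk, lines.size());
                List<String> slice = lines.subList(from, to);
                parts.add(pool.submit(
                        () -> slice.stream().mapToInt(l -> l.split("\\s+").length).sum()));
            }
            int total = 0;
            for (Future<Integer> f : parts) total += f.get();
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // A single thread reads the whole file sequentially -- no competing seeks.
        Path file = Files.createTempFile("sample", ".txt");
        Files.write(file, List.of("Call me Ishmael", "some years ago", "never mind how long"));
        List<String> lines = Files.readAllLines(file);

        // Only once the data is in memory is the work split between threads.
        System.out.println(countWords(lines, 2)); // prints 10
    }
}
```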