In practice, is there any case where an already linear-time algorithm needs to be parallelized? My teacher argues that it is not worth it, but I don't believe so.
Your teacher is mistaken. The run-time complexity (O(n), O(log n), etc.) of the single-CPU algorithm has no bearing on whether or not it will benefit from parallelization.
Changing your code from using 1 CPU to using K CPUs will at best divide the run time by a factor of K. Since you can't arbitrarily create CPUs out of thin air, K is effectively a constant. So, the run-time complexity is not affected by parallelization. All you can hope to do is get a constant factor improvement.
Which isn't to say that it's not worth doing - in some cases, a two-fold improvement is hugely beneficial. Plus, in a massively parallel system of thousands of CPUs, that constant gets pretty big.
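To make the constant-factor point concrete, here is a minimal sketch in Python (my own illustration, not something from the question) that splits an O(n) sum across K worker processes. The algorithm stays O(n), but the wall-clock time drops by roughly a factor of K:

```python
# Splitting an O(n) sum across K worker processes.
# The asymptotic complexity is unchanged; only the constant factor improves.
from multiprocessing import Pool

def partial_sum(chunk):
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(10_000_000))
    K = 4  # effectively a constant, bounded by the CPUs you actually have
    chunk_size = (len(data) + K - 1) // K
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with Pool(K) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same result as sum(data), computed on K CPUs
```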
Definite YES. Graphics cards offer massive parallelism, and switching from the CPU to parallel computation on the GPU can save a lot of time. A linear-time algorithm can see a monumental speedup when executed in parallel. See GPGPU and its "Applications" section, or search for "graphics card computation".
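As a rough sketch of what this looks like in practice, here is the same kind of O(n) reduction offloaded to a GPU. This assumes the CuPy library and a CUDA-capable card, neither of which the answer above specifies; it is just one of several GPGPU options:

```python
# A minimal sketch: an O(n) reduction executed on the GPU via CuPy.
# Assumes a CUDA-capable GPU and the cupy package are installed.
import cupy as cp

data = cp.arange(10_000_000, dtype=cp.float32)  # array lives in GPU memory
total = cp.sum(data)                            # reduction runs across many GPU cores
print(float(total))                             # copy the scalar result back to the host
```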
Although you did not ask, the answer in theory is also a definite yes: there is a complexity class NC for problems that can be "effectively parallelized" (solved in polylogarithmic time given a polynomial number of processors), and there are P-complete problems, which can be solved in polynomial time but are suspected not to be in NC (just as there are P problems and NP-complete problems, and NP-complete problems are suspected not to be in P).
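To illustrate what "polylogarithmic time on polynomially many processors" means, here is a sketch (a sequential simulation, purely my own illustration) of summing n numbers in O(log n) parallel rounds: in each round, every pair of values could be added by its own processor, so with about n/2 processors only logarithmically many rounds are needed:

```python
# Simulates the rounds of a logarithmic-depth parallel reduction.
# On a real parallel machine, every addition within a round would run on its
# own processor simultaneously, so the depth (number of rounds) is O(log n).
def parallel_sum_rounds(values):
    values = list(values)
    rounds = 0
    while len(values) > 1:
        values = [values[i] + values[i + 1] if i + 1 < len(values) else values[i]
                  for i in range(0, len(values), 2)]
        rounds += 1
    return values[0], rounds

total, depth = parallel_sum_rounds(range(16))
print(total, depth)  # 120 4 -- the sum of 0..15 in 4 rounds instead of 15 sequential additions
```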