
Parallelize an already linear-time algorithm?

In practice, is there any case where an already linear-time algorithm needs to be parallelized? My teacher argues that it is not worth it, but I don't believe that.

asked by minhle_r7 on Dec 01 '22

2 Answers

Your teacher is mistaken. The run-time complexity (O(n), O(log n), etc.) of the single-CPU algorithm has no bearing on whether or not it will benefit from parallelization.

Changing your code from using 1 CPU to using K CPUs will at best divide the run time by a factor of K. Since you can't arbitrarily create CPUs out of thin air, K is effectively a constant. So, the run-time complexity is not affected by parallelization. All you can hope to do is get a constant factor improvement.

That isn't to say it's not worth doing: in some cases, a two-fold improvement is hugely beneficial, and in a massively parallel system with thousands of CPUs, that constant factor gets pretty big.
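
To make the constant-factor point concrete, here is a minimal Python sketch (my illustration, not code from the answer): summing n numbers is O(n) on one CPU, and splitting the scan across K worker processes still does O(n) total work, so the best you can hope for is roughly a K-fold wall-clock improvement.

```python
from multiprocessing import Pool

def chunk_sum(chunk):
    # Each worker still performs a plain linear scan over its slice.
    return sum(chunk)

def parallel_sum(data, k=4):
    # Split the input into k roughly equal chunks, one per worker process.
    size = (len(data) + k - 1) // k
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(processes=k) as pool:
        # Total work is unchanged (O(n)); wall-clock time is at best divided by k.
        return sum(pool.map(chunk_sum, chunks))

if __name__ == "__main__":
    data = list(range(10_000_000))
    print(parallel_sum(data, k=4))
```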

answered by mbeckish on Dec 09 '22

A definite yes. Graphics cards offer massive parallelism, and switching from the CPU to parallel computation on the GPU can save a lot of time. A linear-time algorithm can see a monumental speedup when executed in parallel. See GPGPU and its "Applications" section, or search for "graphics card computation".
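
As a small illustration of the GPU route (my sketch, assuming the CuPy library and a CUDA-capable GPU are available; neither is named in the answer), the same O(n) reduction can be offloaded to the GPU, where thousands of cores share the work:

```python
import numpy as np
import cupy as cp   # GPU array library covering much of the NumPy API

data_cpu = np.random.rand(10_000_000)

# CPU: a single-threaded linear scan over the array.
cpu_total = data_cpu.sum()

# GPU: copy the data to device memory and let the GPU reduce it in parallel.
data_gpu = cp.asarray(data_cpu)
gpu_total = cp.sum(data_gpu)

print(cpu_total, float(gpu_total))  # same result; often much faster on the GPU for large n
```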

Although you did not ask, the answer in theory is also a definite yes: there is a complexity class NC of problems that can be "efficiently parallelized" (solved in polylogarithmic time given a polynomial number of processors), and there are "P-complete" problems, which can be solved in polynomial time but are suspected not to be in NC (just as there are problems in P and NP-complete problems, and the NP-complete ones are suspected not to be in P).
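
To give a feel for what "efficiently parallelized" means here, the sketch below (my illustration, not part of the original answer) sequentially simulates a log-depth parallel reduction: with enough processors, every round of pairwise additions would take constant time, so the whole sum finishes in O(log n) parallel steps.

```python
def parallel_sum_simulated(values):
    # Simulate a reduction tree: each round pairs up neighbours and adds them.
    # On a parallel machine with n processors, each round takes O(1) time,
    # so the total depth (number of rounds) is O(log n).
    level = list(values)
    rounds = 0
    while len(level) > 1:
        level = [level[i] + level[i + 1] if i + 1 < len(level) else level[i]
                 for i in range(0, len(level), 2)]
        rounds += 1
    return (level[0] if level else 0), rounds

total, depth = parallel_sum_simulated(range(16))
print(total, depth)  # 120 4 -> 16 numbers summed in 4 parallel rounds
```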

answered by sdcvvc on Dec 09 '22