 

Are algorithms rated in big-O notation affected by parallelism?

I've just read an article about a breakthrough in matrix multiplication: an algorithm that runs in O(n^2.373). But I guess matrix multiplication is something that can be parallelized. So, if we ever start producing thousand-core processors, will this become irrelevant? How would things change?

asked Jun 23 '12 06:06 by MaiaVictor


2 Answers

Parallel execution doesn't change the asymptotic complexity of a particular algorithm -- at best, you're taking the time for a given input size and dividing it by the number of cores. That may reduce the running time by a constant factor, but it has no effect on the algorithm's complexity.
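A minimal sketch of that point (the function names and the ideal-speedup model are just for illustration, not a real benchmark):

```python
# Illustrative sketch: the classic O(n^3) matrix multiply performs about
# n^3 scalar multiplications. Splitting the work across p cores divides
# wall-clock time by at most p (a constant), but the total work -- and
# hence the growth rate -- is still cubic in n.

def serial_ops(n):
    # Multiplications done by the naive triple-loop algorithm.
    return n ** 3

def ideal_parallel_time(n, p):
    # Best case: p cores, perfect load balance, zero overhead.
    return serial_ops(n) / p

# Doubling n multiplies the (ideal) parallel time by 2^3 = 8,
# no matter how many cores you throw at it.
for p in (1, 4, 1000):
    ratio = ideal_parallel_time(200, p) / ideal_parallel_time(100, p)
    print(p, ratio)  # ratio is 8.0 for every p
```

The constant p disappears under big-O, which is exactly why the notation is unaffected.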

At the same time, parallel execution does sometimes change which algorithm(s) you want to use for particular tasks. Some algorithms that work well in serial code just don't split up into parallel tasks very well. Others that have higher complexity might be faster for practical-sized problems because they run better in parallel.
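A classic illustration of that trade-off (a sketch, simulated on one core) is the prefix sum: the serial scan does O(n) additions, while the Hillis-Steele parallel scan does O(n log n) total additions but needs only O(log n) parallel steps, so with enough cores it can finish sooner despite doing more work.

```python
# Serial scan: O(n) additions, but inherently sequential.
def serial_scan(xs):
    out, total = [], 0
    for x in xs:
        total += x
        out.append(total)
    return out

# Hillis-Steele scan, simulated sequentially: in each round, every
# element adds the value `offset` positions to its left. There are
# ceil(log2(n)) rounds; each round is fully parallelizable.
def hillis_steele_scan(xs):
    out = list(xs)
    offset = 1
    while offset < len(out):
        out = [v + (out[i - offset] if i >= offset else 0)
               for i, v in enumerate(out)]
        offset *= 2
    return out

print(hillis_steele_scan([1, 2, 3, 4]))  # [1, 3, 6, 10]
```

Both produce the same result; which one you'd pick depends on how many cores you actually have.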

For an extremely large number of cores, the complexity of the calculation itself may become secondary to simply getting the necessary data to and from all the cores. Most big-O analyses don't take these effects into account for a serial calculation, but they can become quite important for parallel calculations, especially on models of parallel machines that don't give uniform access to all nodes.

answered Sep 22 '22 16:09 by Jerry Coffin


If quantum computing ever comes to something practical, then yes, the complexity of algorithms will change.

In the meantime, parallelizing an algorithm on a fixed number of processors just divides its runtime proportionally (and even that only in the best case, when there are no dependencies between the tasks performed on each processor). Dividing the runtime by a constant leaves the complexity unchanged.
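That "best case" caveat is usually quantified with Amdahl's law (added here as context, not part of the original answer): if a fraction s of the work is inherently serial, the speedup on p processors is 1 / (s + (1 - s)/p), which can never exceed 1/s no matter how many cores you add.

```python
# Amdahl's law: speedup on p processors when a fraction s of the
# runtime cannot be parallelized.
def amdahl_speedup(s, p):
    return 1.0 / (s + (1.0 - s) / p)

# Even with 1000 cores, a 10% serial fraction caps the speedup near 10x.
print(round(amdahl_speedup(0.1, 1000), 2))  # 9.91
```

Either way, the speedup is bounded by a constant, so the big-O classification of the algorithm stays the same.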

answered Sep 20 '22 16:09 by salva