
Worse is better. Is there an example?

Quicksort has a worst-case time complexity of O(n^2), but it is usually considered better than sorting algorithms with O(n log n) worst-case complexity (such as merge sort or heapsort), because its average case is O(n log n) with small constant factors and good cache behaviour.
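A minimal sketch of a randomized quicksort (not in place, written for clarity rather than speed) to show the idea:

    import random

    def quicksort(xs):
        # Expected O(n log n), worst case O(n^2) if the pivot choice is repeatedly unlucky.
        if len(xs) <= 1:
            return xs
        pivot = random.choice(xs)
        return (quicksort([x for x in xs if x < pivot])
                + [x for x in xs if x == pivot]
                + quicksort([x for x in xs if x > pivot]))

    print(quicksort([5, 3, 8, 1, 9, 2]))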


The simplex algorithm has exponential time complexity in the worst case, but on practically occurring inputs it almost always runs in polynomial time. Polynomial-time algorithms for linear programming do exist (the ellipsoid method, and later interior-point methods), but the ellipsoid method in particular is so complicated and has such large constants that simplex remains the practical choice for many problems.
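For illustration, here is a tiny LP solved with scipy.optimize.linprog (assuming SciPy is installed; which solver it delegates to is an implementation detail, and the problem data here is made up):

    from scipy.optimize import linprog

    # Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
    # linprog minimizes, so the objective is negated.
    c = [-3, -2]
    A_ub = [[1, 1],
            [1, 3]]
    b_ub = [4, 6]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)  # optimal point and the maximized objective value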


Monte Carlo integration is a probabilistic method of calculating definite integrals that offers no guarantee of returning the correct answer. Yet, in real-world situations, and especially for high-dimensional integrals, it returns an accurate answer far faster than provably correct deterministic methods.
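A minimal sketch of the idea (one-dimensional, uniform sampling):

    import random

    def monte_carlo_integrate(f, a, b, samples=100_000):
        # Average f at random points in [a, b] and scale by the interval length.
        total = sum(f(random.uniform(a, b)) for _ in range(samples))
        return (b - a) * total / samples

    # Example: integrate x^2 over [0, 1]; the exact answer is 1/3.
    print(monte_carlo_integrate(lambda x: x * x, 0.0, 1.0))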


"Worse is Better" can be seen in languages too, for example the ideas behind Perl, Python, Ruby, Php even C# or Java, or whatever language that isn't assembler or C (C++ might fit here or not).

Basically there is always a "perfect" solution, but many times it's better to use a "worse" tool/algorithm/language to get results faster and with less pain. That's why people use these higher-level languages, even though they are "worse" from the ideal computer-language point of view: they are more human-oriented instead.


The Coppersmith–Winograd algorithm for square matrix multiplication. Its time complexity is O(n^2.376), vs. O(n^3) for the naive multiplication algorithm or O(n^2.807) for the Strassen algorithm.

From the Wikipedia article:

However, unlike the Strassen algorithm, it is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware (Robinson 2005).
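To make the trade-off concrete, here is a minimal sketch of Strassen's O(n^2.807) algorithm (assuming square matrices whose size is a power of two, with an arbitrary cutoff below which the naive "worse" algorithm is used, since the recursion only pays off for large n):

    def naive_mult(A, B):
        # Standard O(n^3) multiplication on lists of lists.
        n = len(A)
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    def add(A, B):
        return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

    def sub(A, B):
        return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

    def strassen(A, B, cutoff=64):
        n = len(A)
        if n <= cutoff:
            return naive_mult(A, B)  # below the cutoff the naive algorithm wins
        m = n // 2
        # Split each matrix into four m x m quadrants.
        A11 = [row[:m] for row in A[:m]]; A12 = [row[m:] for row in A[:m]]
        A21 = [row[:m] for row in A[m:]]; A22 = [row[m:] for row in A[m:]]
        B11 = [row[:m] for row in B[:m]]; B12 = [row[m:] for row in B[:m]]
        B21 = [row[:m] for row in B[m:]]; B22 = [row[m:] for row in B[m:]]
        # Seven recursive products instead of eight.
        M1 = strassen(add(A11, A22), add(B11, B22), cutoff)
        M2 = strassen(add(A21, A22), B11, cutoff)
        M3 = strassen(A11, sub(B12, B22), cutoff)
        M4 = strassen(A22, sub(B21, B11), cutoff)
        M5 = strassen(add(A11, A12), B22, cutoff)
        M6 = strassen(sub(A21, A11), add(B11, B12), cutoff)
        M7 = strassen(sub(A12, A22), add(B21, B22), cutoff)
        C11 = add(sub(add(M1, M4), M5), M7)
        C12 = add(M3, M5)
        C21 = add(M2, M4)
        C22 = add(sub(add(M1, M3), M2), M6)
        # Reassemble the quadrants.
        return ([r1 + r2 for r1, r2 in zip(C11, C12)]
                + [r1 + r2 for r1, r2 in zip(C21, C22)])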


This statement can be applied to nearly any parallel algorithm. The reason they were not heavily researched in the early days of computing is that, for a single thread of execution (think uniprocessor), they are indeed slower than their well-known sequential counterparts in asymptotic complexity, in constant factors for small n, or both. However, on current and future computing platforms, an algorithm which can make use of a few (think multicore), a few hundred (think GPU), or a few thousand (think supercomputer) processing elements will beat the pants off the sequential version in wall-clock time, even if the total time/energy spent by all processors is much greater for the parallel version.

Sorts, graph algorithms, and linear algebra techniques alike can be accelerated in terms of wall-clock time by bearing the cost of a little extra bookkeeping, communication, and runtime overhead in order to parallelize.
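A toy illustration (assuming CPython with concurrent.futures; the workload and chunking are made up for the example): the parallel version does strictly more total work (splitting the input, starting processes, collecting results) yet finishes sooner in wall-clock time once enough cores are available.

    import math
    from concurrent.futures import ProcessPoolExecutor

    def chunk_sum(bounds):
        # The work assigned to one processing element.
        start, stop = bounds
        return sum(math.sqrt(i) for i in range(start, stop))

    def parallel_sum(n, workers=4):
        # Extra bookkeeping: split the range, ship chunks to workers, reduce the partial sums.
        step = n // workers
        chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
                  for i in range(workers)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return sum(pool.map(chunk_sum, chunks))

    if __name__ == "__main__":  # guard required for process-based parallelism on some platforms
        print(parallel_sum(10_000_000))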


Often an algorithm (like quicksort) that can be easily parallelized or randomized will be chosen over competing algorithms that lack these qualities. Furthermore, it is often the case that an approximate solution to a problem is acceptable when an exact algorithm would yield exponential runtimes, as in the Travelling Salesman Problem.
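For instance, a minimal sketch of the nearest-neighbour heuristic for TSP (the city coordinates are made up for the example): it gives no optimality guarantee, but runs in O(n^2) instead of the exponential time an exact search would need.

    import math

    def nearest_neighbour_tour(points):
        # Greedy heuristic: always visit the closest unvisited city next.
        unvisited = list(range(1, len(points)))
        tour = [0]
        while unvisited:
            last = points[tour[-1]]
            nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
            unvisited.remove(nxt)
            tour.append(nxt)
        return tour

    cities = [(0, 0), (2, 1), (5, 0), (1, 4), (4, 3)]
    print(nearest_neighbour_tour(cities))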