What is the Big-O complexity for widespread algorithms of the basic arithmetic operations like multiplication, square root, logarithm, scalar and matrix product?
Are there exotic algorithms which are more efficient, in terms of Big-O complexity, but are not very widespread in practical solutions (e.g. not implemented in popular software libraries)?
Most simple operations can be treated as O(1) because the input size is usually fixed (e.g. 32 or 64 bits). Under normal conditions, your platform performs essentially the same work for multiplication, square root, logarithm, etc., regardless of the operand values.
Big-O notation is a way to measure an algorithm's efficiency. It describes how the running time of a function grows as the input grows — in other words, how well the function scales. There are two parts to measuring efficiency: time complexity and space complexity.
Time complexity is the number of operations an algorithm performs to complete its task (assuming each operation takes the same amount of time). The algorithm that performs the task in the smallest number of operations is considered the most efficient one in terms of time complexity.
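As an illustration of counting operations, here is a minimal sketch of schoolbook multiplication that tallies the elementary digit-by-digit products: two n-digit numbers require n × n of them, which is where the O(n²) bound for the naïve method comes from. The function name and return convention are made up for this example.

```python
def schoolbook_digit_ops(x, y):
    """Multiply non-negative integers x and y digit by digit.

    Returns (product, op_count), where op_count is the number of
    single-digit multiplications performed — n*m for an n-digit
    times m-digit input, i.e. O(n^2) for equal-length operands.
    """
    xs = [int(d) for d in str(x)][::-1]  # least-significant digit first
    ys = [int(d) for d in str(y)][::-1]
    total = 0
    ops = 0
    for i, dx in enumerate(xs):
        for j, dy in enumerate(ys):
            total += dx * dy * 10 ** (i + j)
            ops += 1
    return total, ops
```

Doubling the number of digits quadruples the operation count, which is exactly what "quadratic time" means in practice.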
See http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations
Matrix product of square matrices:
* O(n^3) — naïve (schoolbook) algorithm;
* O(n^2.807) — Strassen's algorithm.
There is also an O(n^2.38) Coppersmith–Winograd algorithm, but I don't think it's widespread due to the huge constant hidden behind the Big-O.
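Strassen's trick is to compute a 2×2 block product with 7 recursive multiplications instead of 8, giving O(n^log2(7)) ≈ O(n^2.807). A minimal sketch, assuming the matrix size is a power of two and using NumPy only for the block additions and slicing:

```python
import numpy as np

def strassen(A, B):
    """Multiply square matrices A and B whose size is a power of two."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    # Split each matrix into four half-size blocks.
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Strassen's 7 products (the naive method would need 8).
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    # Reassemble the result from the 7 products.
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

The extra block additions are O(n^2) per level, so they don't change the exponent, but they do add to the constant factor — which is why libraries only switch to Strassen (if at all) above a sizeable cutoff and use the naïve kernel below it.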
Big-int multiplication:
* O(n^2) — naïve (schoolbook) algorithm;
* O(n^1.585) — Karatsuba;
* O(n · log n · log log n) — Schönhage–Strassen.
There is also an n · log n · 2^(O(log* n)) algorithm published in 2008, but that was too new to be widespread.
Usually the naïve method is good enough for normal-sized input.
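Karatsuba's O(n^1.585) bound comes from replacing the four half-size multiplications of the naïve split with three, using (a+b)(c+d) − ac − bd = ad + bc. A minimal sketch on non-negative Python ints, splitting on bit length (helper name is illustrative):

```python
def karatsuba(x, y):
    """Multiply non-negative integers x and y with 3 recursive
    multiplications per level instead of 4 — O(n^log2(3)) ≈ O(n^1.585)."""
    if x < 10 or y < 10:  # base case: small operands
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    # Split around 2**half: x = a*2^half + b, y = c*2^half + d.
    a, b = x >> half, x & ((1 << half) - 1)
    c, d = y >> half, y & ((1 << half) - 1)
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # One extra multiplication replaces two: (a+b)(c+d) - ac - bd = ad + bc.
    ad_plus_bc = karatsuba(a + b, c + d) - ac - bd
    return (ac << (2 * half)) + (ad_plus_bc << half) + bd
```

Note that the asymptotic win only pays off for large operands; below a threshold the recursion overhead dominates, which is why real big-int libraries fall back to the schoolbook method for small inputs.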