Space and time are the standard yardsticks for analyzing the complexity of an algorithm. But these days, with GPUs present on mobile devices, there are many applications that can use that processing power to run complex algorithms on a phone; for example, iOS's Metal framework can be used for GPGPU work. Needless to say, it consumes a lot of power.

So my question is: if I am developing or implementing, say, a graph search algorithm on a mobile device, shouldn't I also consider the "power" complexity of my algorithm along with space and time? I know the argument could be that power is not something the algorithm consumes directly itself, and I completely agree with that. So maybe it is imprecise to call power another dimension of an algorithm's complexity. But shouldn't power at least be treated as a performance measure of an algorithm?
To express the time complexity of an algorithm we use Big O notation. It is the language we use to describe how an algorithm's running time grows with its input, which lets us compare the efficiency of different approaches to a problem and make decisions accordingly.
Complexity is measured in two dimensions: time (how long a function takes to complete) and space (how much memory it consumes while executing). A simple way to think about a function's complexity is to count the number of things it does or creates per element and multiply that by the size of the input (n).
When we analyse an algorithm, we express its time complexity in Big O notation. For example, the time complexity of linear search is O(n) and that of binary search is O(log n), where n is the size of the input and the number of operations grows in proportion to n and log n respectively.
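As a rough illustration (a Swift sketch with made-up function names, not a library API), counting how many elements each search inspects makes the O(n) versus O(log n) growth visible directly:

```swift
// Count how many elements each search inspects, to make the O(n) vs O(log n)
// growth visible. Function names and return shapes are just for illustration.

func linearSearch(_ haystack: [Int], _ target: Int) -> (index: Int?, probes: Int) {
    var probes = 0
    for (i, value) in haystack.enumerated() {
        probes += 1
        if value == target { return (i, probes) }
    }
    return (nil, probes)
}

func binarySearch(_ sorted: [Int], _ target: Int) -> (index: Int?, probes: Int) {
    var probes = 0
    var low = 0
    var high = sorted.count - 1
    while low <= high {
        probes += 1
        let mid = low + (high - low) / 2
        if sorted[mid] == target { return (mid, probes) }
        if sorted[mid] < target { low = mid + 1 } else { high = mid - 1 }
    }
    return (nil, probes)
}

let data = Array(0..<1_000_000)
print(linearSearch(data, 999_999).probes)   // ~1,000,000 elements inspected: O(n)
print(binarySearch(data, 999_999).probes)   // ~20 elements inspected: O(log n)
```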
No. Complexity explains how the algorithm scales in time / memory. Power will be a function of time and memory.
Say you have two algorithms that solve the same problem: A is O(N^2) and B is O(N^3). For n = 1000, B uses 1 unit of energy while A uses 20. Scale up to n = 10k and B needs 1000 units while A needs 2000, so the gap has almost closed. At n = 100k, B needs 1,000,000 units while A needs only 200,000, and A keeps pulling ahead from there. This assumes the energy cost per operation stays constant as the input grows.
By the way, the same thing happens with time: for short arrays, nothing beats linear search.
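As a back-of-the-envelope sketch of the scaling argument above (the constants of 20 and 1 unit at n = 1000 are the hypothetical figures from this answer, not measurements), something like this reproduces the numbers:

```swift
// Reproduce the hypothetical figures above, assuming a fixed energy cost per
// operation. The constants (A: 20 units, B: 1 unit at n = 1000) are made up.

func unitsA(_ n: Double) -> Double {      // O(n^2) algorithm
    let scale = n / 1000.0
    return 20.0 * scale * scale
}

func unitsB(_ n: Double) -> Double {      // O(n^3) algorithm
    let scale = n / 1000.0
    return 1.0 * scale * scale * scale
}

for n in [1_000.0, 10_000.0, 100_000.0] {
    print("n = \(Int(n)): A ~\(Int(unitsA(n))) units, B ~\(Int(unitsB(n))) units")
}
// n = 1000:    A ~20       B ~1          (B looks far cheaper)
// n = 10000:   A ~2000     B ~1000       (the gap has almost closed)
// n = 100000:  A ~200000   B ~1000000    (A is now 5x cheaper)
```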
For a specific case (rendering UI at a fixed resolution) it makes sense to measure power usage and optimize for it. But what works for today's resolution will not necessarily be the right choice tomorrow.
For this to be possible, you need a model of energy consumption that you can relate to the atomic operations in your algorithms.
Like "a multiply consumes one unit of energy" and "a memory slot uses two units of energy per unit of time". Maybe the relation Energy = Time x Space could make sense.
Anyway, such a "naïve" model may suffer from the same problem as the classic model behind time complexity: it bears little resemblance to how modern architectures actually behave and can be wrong by orders of magnitude. More accurate models, on the other hand, quickly become intractable analytically.