Big O notation such as O(1) can describe the following code:

O(1):

    for (int i = 0; i < 10; i++) {
        // do a fixed amount of stuff, independent of any input size
        a[i] = i;
    }

O(n):

    for (int i = 0; i < n; i++) {
        // do stuff once per element: n iterations
        a[i] = i;
    }

O(n^2):

    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            // do stuff for every pair (i, j): n * n iterations
            a[i][j] = i + j;
        }
    }
Another question:
O(n log n) is known as loglinear complexity: roughly log n operations occur for each of n items. O(n log n) time is typical of recursive sorting algorithms and other comparison sorts such as merge sort, heapsort, and binary tree sort. Quicksort, for example, runs in O(n log n) time on average while using only O(log n) stack space.
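To see where the log n factor comes from, here is a minimal sketch (n and the empty body are illustrative placeholders, not taken from any particular answer): the outer loop runs n times, and the inner loop halves j on every pass, so the innermost body executes roughly n * log2(n) times in total.

    for (int i = 0; i < n; i++) {        // outer loop: n passes
        for (int j = n; j > 1; j /= 2) { // inner loop: about log2(n) passes
            // constant-time work here runs ~ n * log2(n) times overall
        }
    }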
Logarithmic running time (O(log n)) essentially means that the running time grows in proportion to the logarithm of the input size. As an example, if 10 items take at most some amount of time x, 100 items take at most, say, 2x, and 10,000 items take at most 4x, then it looks like O(log n) time (note that log10(10) = 1, log10(100) = 2, and log10(10,000) = 4).
Logarithmic time complexity, written O(log n): when an algorithm has O(log n) running time, the number of operations grows very slowly as the input size grows; doubling the input adds only about one more operation.
Classic example:
    while (x > 0) {
        x /= 2;
    }
This will be:
    Iteration | x
    ----------+--------
        0     | x
        1     | x/2
        2     | x/4
       ...    | ...
        k     | x/2^k
The loop ends once x has been halved k times, where x/2^k ≈ 1, i.e. 2^k = x. Applying log to both sides gives k = log2(x), so the loop runs about log2(x) times: O(log n).
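Binary search is the canonical algorithm with this halving behavior: each comparison discards half of the remaining range, so at most about log2(n) iterations run. A minimal C sketch (the function name and signature here are illustrative, not taken from the answers above):

    #include <stddef.h>

    /* Search a sorted int array for `target` over the half-open
       range [lo, hi); each iteration halves the range, so the loop
       runs at most about log2(n) times. Returns the index of
       `target`, or -1 if it is absent. */
    int binary_search(const int *a, size_t n, int target) {
        size_t lo = 0, hi = n;
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (a[mid] == target) return (int)mid;
            if (a[mid] < target)  lo = mid + 1;
            else                  hi = mid;
        }
        return -1;
    }

For example, a sorted array of 1,000,000 ints needs at most about 20 comparisons, since 2^20 ≈ 1,000,000.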