
Why is there a huge difference in performance even though the time complexity of the two functions below seems to be similar?

I am trying to evaluate the performance of two Python methods that sort a list of numbers. The time complexity seems to be O(n^2) for both of them, but empirical data shows that one performs much better than the other. Any reason for this?

I wrote two methods: one uses nested for loops, and the other repeatedly finds the max, inserts it at the front of a new list, and removes it from the old list.

Method 1:

def mysort1(l):
    i = 0
    j = 1
    for i in range(0,len(l)-1):
        for j in range(i,len(l)):
            if l[i] > l[j]:
                tmp = l[j]
                l[j] = l[i]
                l[i] = tmp
    return l

Method 2:

def mysort2(l):
    nl = []
    for i in range(0,len(l)):
        m = max(l)
        nl.insert(0, m)
        l.remove(m)
    return nl

Both were tested with a list of 10000 numbers in reverse order. When profiled with profile, Method 1 takes approximately 8 seconds (10000+ function calls) and Method 2 takes only 0.6 seconds (30000+ function calls). Why does Method 2 perform so much better than Method 1 even though the time complexity of both seems to be the same?
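
For reference, here is a minimal sketch of how the comparison can be reproduced with timeit instead of profile (the 10000-element reversed list matches the setup above; mysort1 and mysort2 are assumed to be defined as shown):

import timeit

# Same input as described above: 10000 numbers in reverse order.
data = list(range(10000, 0, -1))

# Both functions mutate their input, so time each on a fresh copy.
t1 = timeit.timeit(lambda: mysort1(list(data)), number=1)
t2 = timeit.timeit(lambda: mysort2(list(data)), number=1)
print(f"mysort1: {t1:.2f}s  mysort2: {t2:.2f}s")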

asked Sep 02 '19 by sj7070


1 Answer

Essentially, as the comments have suggested, it boils down to the fact that CPython, the main implementation of Python, is written in C. This answer points out that the real reason is that the native C implementations of the list operations (max, remove, insert, etc.) are much faster than your hand-written Python loops: they have been heavily optimized, and C generally runs far faster than Python for operations like this. In other words, both functions do O(n^2) work, but in mysort2 almost all of that work happens inside C built-ins, while in mysort1 every comparison and swap is executed as interpreted Python bytecode, so the constant factor is much larger.
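
As a rough illustration of that constant-factor gap (a minimal sketch, not part of the original answer): a hand-written Python loop that finds the maximum is typically several times slower than the C-implemented built-in max, even though both do the same O(n) scan.

import timeit

def py_max(lst):
    # Pure-Python linear scan: every comparison runs as interpreted bytecode.
    best = lst[0]
    for x in lst[1:]:
        if x > best:
            best = x
    return best

data = list(range(100000))
print("python loop:", timeit.timeit(lambda: py_max(data), number=100))
print("built-in max:", timeit.timeit(lambda: max(data), number=100))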

Here is another answer to the question "Why are Python Programs often slower than the Equivalent Program Written in C or C++?".

From the answer:

Internally, the reason that Python code executes more slowly is that the code is interpreted at runtime instead of being compiled to native code at compile time.

Other interpreted languages, such as Java bytecode and .NET bytecode, run faster than Python because their standard distributions include a JIT compiler that compiles bytecode to native code at runtime. The reason CPython doesn't already have a JIT compiler is that the dynamic nature of Python makes it difficult to write one. There is work in progress to write a faster Python runtime, so you should expect the performance gap to be reduced in the future, but it will probably be a while before the standard Python distribution includes a powerful JIT compiler.

answered Oct 21 '22 by dodgez