What is the time-complexity of np.transpose?
My guess is that it runs two nested for loops internally, which would make it O(n²), but can someone confirm that? Also, is there any way to reduce the time complexity of a matrix transpose?
In memory, a matrix is stored as one contiguous block, as if it were a one-dimensional array. N dimensions are an abstraction we humans use to make the problem easier to reason about. For NumPy, transposing a matrix is simply an axis change; the underlying memory is not touched.
So the time complexity is O(1): to transpose an array, NumPy just swaps the shape and stride information for each axis.
No data needs to be copied for this to happen. Numpy can simply change how it looks at the underlying memory to construct the new array.
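As a quick illustration (a minimal sketch; the array name and sizes here are arbitrary), you can inspect .shape and .strides before and after transposing and see that only this metadata changes:
>>> import numpy as np
>>> A = np.arange(12, dtype=np.float64).reshape(3, 4)
>>> A.shape, A.strides        # 3 rows of 4 float64 values; strides are in bytes
((3, 4), (32, 8))
>>> A.T.shape, A.T.strides    # transposing just swaps the (shape, stride) pairs
((4, 3), (8, 32))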
If you want to dig deeper into this topic, there is an excellent illustrated answer on how strides work.
It is O(1), because it does not copy any data at all; it just modifies the shape and strides.
>>> A = np.random.rand(3,4)
>>> A.flags
C_CONTIGUOUS : True
F_CONTIGUOUS : False
OWNDATA : True
WRITEABLE : True
ALIGNED : True
WRITEBACKIFCOPY : False
UPDATEIFCOPY : False
>>> np.transpose(A).flags
C_CONTIGUOUS : False
F_CONTIGUOUS : True
OWNDATA : False
WRITEABLE : True
ALIGNED : True
WRITEBACKIFCOPY : False
UPDATEIFCOPY : False
Note that C_CONTIGUOUS and F_CONTIGUOUS were swapped (i.e. the major iteration order changed), and the transposed array has OWNDATA set to False (i.e. it is just a view into the original array's data).
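Because it is a view, writing through the transpose also changes the original array (a small illustrative example; the name B and the value are arbitrary):
>>> B = np.transpose(A)
>>> B[0, 0] = 99.0
>>> A[0, 0] == 99.0
True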
Related tip: when you have a view like this, you can find the array that owns the data by checking the base attribute:
>>> np.transpose(A).base is A
True
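If you want to convince yourself empirically, a rough timing check (an informal sketch; exact numbers depend on your machine and NumPy version) shows that the cost of np.transpose stays roughly constant as the array grows:
import numpy as np
from timeit import timeit

for n in (100, 1_000, 10_000):
    A = np.random.rand(n, n)
    # transpose only rewrites shape/stride metadata, so the measured time
    # should not grow with n, unlike an actual element-by-element copy
    t = timeit(lambda: np.transpose(A), number=10_000)
    print(n, t)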