Algorithm to apply permutation in constant memory space

I saw this question in a programming interview book; here I'm simplifying it.

Assume you have an array A of length n and a permutation array P of the same length. Your method should return an array in which the element at index i is A[P[i]], i.e. the elements of A rearranged into the order specified by P.

Quick example: your method takes A = [a, b, c, d, e] and P = [4, 3, 2, 0, 1] and returns [e, d, c, a, b]. You are allowed to use only constant space (i.e. you can't allocate another array, which would take O(n) space).
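
For reference, the straightforward version that uses O(n) extra space, and is therefore disallowed here, would be something like the sketch below (Java, with a char array and an illustrative method name; it is shown only to pin down what the result should look like):

static char[] applyWithExtraSpace(char[] A, int[] P) {
    // Disallowed baseline: allocates a second array, so it uses O(n) extra space.
    char[] result = new char[A.length];
    for (int i = 0; i < A.length; i++)
        result[i] = A[P[i]];   // result picks up A's elements in the order given by P
    return result;
}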

Ideas?

asked May 11 '13 by ahmet alp balkan


2 Answers

There is a trivial O(n^2) algorithm, but you can do this in O(n). E.g.:

A = [a, b, c, d, e]

P = [4, 3, 2, 0, 1]

We can swap each element in A into the position required by P; after each swap, one more element is in its right place. We do this in a circular fashion for each of the positions, following a cycle until it closes (the elements being swapped are marked with ^s):

[a, b, c, d, e] <- P[0] = 4 != 0 (where a initially was), swap 0 (where a is) with 4
 ^           ^
[e, b, c, d, a] <- P[4] = 1 != 0 (where a initially was), swap 4 (where a is) with 1
    ^        ^
[e, a, c, d, b] <- P[1] = 3 != 0 (where a initially was), swap 1 (where a is) with 3
    ^     ^
[e, d, c, a, b] <- P[3] = 0 == 0 (where a initially was), finish step

After one cycle, we find the next element in the array that is not yet in its right position and do the same again. In the end you get the result you want, and since each position is touched a constant number of times (for each position, at most one swap is performed), the running time is O(n).

You can store the information about which elements are already in their right place by either:

  1. setting the corresponding entry in P to -1, which is unrecoverable: after the operations above, P becomes [-1, -1, 2, -1, -1], which indicates that only the entry at index 2 might still be out of place; one further step makes sure it is in the right position and terminates the algorithm; or

  2. setting the corresponding entry P[i] to -(P[i] + 1): P becomes [-5, -4, 2, -1, -2], from which the original P can be recovered trivially in O(n) by applying the same transformation again (a sketch using this option follows below).
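
A minimal sketch of this cycle-following approach in Java, assuming option 2's recoverable marking and a char array for concreteness (the method name and element type are illustrative, not from the original answer):

static void permuteInPlace(char[] A, int[] P) {
    int n = A.length;
    for (int start = 0; start < n; start++) {
        if (P[start] < 0)
            continue;                 // this index was already placed as part of an earlier cycle
        int i = start;
        // Walk the cycle beginning at 'start', swapping each element into its place.
        while (P[i] != start) {
            int next = P[i];
            char tmp = A[i];          // swap A[i] and A[next]
            A[i] = A[next];
            A[next] = tmp;
            P[i] = -(next + 1);       // recoverable mark: index i is done
            i = next;
        }
        P[i] = -(P[i] + 1);           // mark the last index of the cycle
    }
    for (int i = 0; i < n; i++)
        P[i] = -(P[i] + 1);           // restore P to its original values
}

With A = [a, b, c, d, e] and P = [4, 3, 2, 0, 1] this leaves A as [e, d, c, a, b], and P is restored to its original contents at the end.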

answered by zw324


Yet another unnecessary answer! This one preserves the permutation array P explicitly, which was necessary for my situation, at a cost in running time. It also does not require tracking the correctly placed elements. I understand that the previous answer provides the O(N) solution, so I guess this one is just for amusement!

We get best-case complexity O(N), worst-case O(N^2), and average-case O(N log N). For large arrays (N ~ 10000 or greater), the average case is essentially O(N).

Here is the core algorithm in Java (I mean pseudo-code *cough cough*)

static void applyPermutation(char[] A, int[] P) {   // char elements to match the example below
    int n = A.length;
    for (int i = 0; i < n - 1; i++) {
        // Find where the element that belongs at position i currently lives:
        // earlier iterations may already have swapped it away from index P[i],
        // so chase the chain of indices forward until we reach an index >= i.
        int ind = P[i];
        while (ind < i)
            ind = P[ind];

        // Swap it into position i.
        char temp = A[i];
        A[i] = A[ind];
        A[ind] = temp;
    }
}

Here is an example of the algorithm running (similar to previous answers):

let A = [a, b, c, d, e]

and P = [2, 4, 3, 0, 1]

then expected = [c, e, d, a, b]

i=0:  [a, b, c, d, e] // (ind=P[0]=2)>=0 no while loop, swap A[0]<->A[2]
       ^     ^
i=1:  [c, b, a, d, e] // (ind=P[1]=4)>=1 no while loop, swap A[1]<->A[4]
          ^        ^
i=2:  [c, e, a, d, b] // (ind=P[2]=3)>=2 no while loop, swap A[2]<->A[3]
             ^  ^
i=3a: [c, e, d, a, b] // (ind=P[3]=0)<3 uh-oh! enter while loop...
                ^
i=3b: [c, e, d, a, b] // loop iteration: ind<-P[0]. now have (ind=2)<3
       ?        ^
i=3c: [c, e, d, a, b] // loop iteration: ind<-P[2]. now have (ind=3)>=3
             ?  ^
i=3d: [c, e, d, a, b] // good index found. Swap A[3]<->A[3]
                ^
done.
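
For completeness, here is a quick way to exercise the method above with the same inputs (the name applyPermutation comes from the block above and is assumed to be a static method of the same class; the demo class itself is not part of the original post):

public class PermutationDemo {
    public static void main(String[] args) {
        char[] A = {'a', 'b', 'c', 'd', 'e'};
        int[]  P = {2, 4, 3, 0, 1};
        applyPermutation(A, P);
        System.out.println(java.util.Arrays.toString(A)); // [c, e, d, a, b]
        System.out.println(java.util.Arrays.toString(P)); // [2, 4, 3, 0, 1] -- P is preserved
    }
    // applyPermutation(char[], int[]) is the method defined earlier in this answer.
}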

This algorithm can bounce around in that while loop over indices j < i, at most i times during the i-th iteration. In the worst case (I think!) each iteration of the outer for loop results in i extra assignments from the while loop, so we'd have an arithmetic series going on, which adds an N^2 term to the complexity. Running this for a range of N and averaging the number of 'extra' assignments needed by the while loop (averaged over many permutations for each N, that is) strongly suggests to me that the average case is O(N log N).

Thanks!

answered by RinRisson