I was asked an algorithmic question in an interview today, and I would love to get SO members' input on it. The question was as follows:
Given N equally sized arrays of integers in ascending order, how would you select the numbers common to all N arrays?
My first thought was to iterate over the elements of the first array and look each one up in the rest of the arrays, but if I am right that would result in N^N iterations. So instead I came up with a solution that counts occurrences in a map, keeping the element as the key and the count as the value. This way I believe the time complexity is just N. Following is my approach implemented in Java:
public static void main(String[] args) {
    int[] arr1 = { 1, 4, 6, 8, 11, 15 };
    int[] arr2 = { 3, 4, 6, 9, 10, 16 };
    int[] arr3 = { 1, 4, 6, 13, 15, 16 };
    System.out.println(commonNumbers(arr1, arr2, arr3));
}

public static List<Integer> commonNumbers(int[] arr1, int[] arr2, int[] arr3) {
    Map<Integer, Integer> countMap = new HashMap<Integer, Integer>();
    // Seed the map with the elements of the first array.
    for (int element : arr1) {
        countMap.put(element, 1);
    }
    // Bump the count for every element of arr2 already seen in arr1.
    for (int element : arr2) {
        if (countMap.containsKey(element)) {
            countMap.put(element, countMap.get(element) + 1);
        }
    }
    // Same for arr3.
    for (int element : arr3) {
        if (countMap.containsKey(element)) {
            countMap.put(element, countMap.get(element) + 1);
        }
    }
    // An element present in all three arrays ends up with a count of 3.
    List<Integer> toReturn = new LinkedList<Integer>();
    for (int key : countMap.keySet()) {
        int count = countMap.get(key);
        if (count == 3) toReturn.add(key);
    }
    return toReturn;
}
I just did this for three arrays to see how it works. The question talks about N arrays, though I think the approach would still hold.
My question is: is there a better approach to solve this problem with time complexity in mind?
Treat as 3 queues. While values are different, "remove" (by incrementing the array index) the smallest. When they match, "remove" (and record) the matches.
int i1 = 0;
int i2 = 0;
int i3 = 0;
while (i1 < array1.length && i2 < array2.length && i3 < array3.length) {
    int next1 = array1[i1];
    int next2 = array2[i2];
    int next3 = array3[i3];
    if (next1 == next2 && next1 == next3) {
        recordMatch(next1);   // recordMatch is a placeholder for however you collect results
        i1++;
        i2++;
        i3++;
    }
    else if (next1 <= next2 && next1 <= next3) {   // advance whichever index holds the smallest value
        i1++;
    }
    else if (next2 <= next1 && next2 <= next3) {
        i2++;
    }
    else {
        i3++;
    }
}
Easily generalized to N arrays, though with N large you'd want to optimize the compares somehow (NPE's "heap").
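For what it's worth, here is a minimal sketch of that generalization (my own code, not the answerer's; it assumes the N sorted arrays are passed as an int[][] and that java.util.List and LinkedList are imported): keep one index per array, and on each pass either record a match when all current values are equal, or advance every index whose value is below the current maximum.
public static List<Integer> commonNumbers(int[][] arrays) {
    List<Integer> matches = new LinkedList<Integer>();
    int n = arrays.length;
    int[] idx = new int[n];                  // one cursor per array
    while (true) {
        int min = Integer.MAX_VALUE;
        int max = Integer.MIN_VALUE;
        for (int a = 0; a < n; a++) {
            if (idx[a] == arrays[a].length) return matches;   // any array exhausted => done
            min = Math.min(min, arrays[a][idx[a]]);
            max = Math.max(max, arrays[a][idx[a]]);
        }
        if (min == max) {                    // all N current values are equal
            matches.add(min);
            for (int a = 0; a < n; a++) idx[a]++;
        } else {
            for (int a = 0; a < n; a++) {    // advance everything below the current max
                if (arrays[a][idx[a]] < max) idx[a]++;
            }
        }
    }
}
Finding the minimum with a linear scan costs O(N) per step, which is where the heap mentioned above starts to pay off for large N.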
I think this can be solved with a single parallel iteration over the N arrays, and an N-element min-heap. In the heap you would keep the current element from each of the N input arrays.
The idea is that at each step you'd advance along the array whose element is at the top of the heap (i.e. is the smallest).
You'll need to be able to detect when the heap consists entirely of identical values. This can be done in constant time as long as you keep track of the largest element you've added to the heap.
If each array contains M elements, the worst-case time complexity would be O(M*N*log(N)), and it would require O(N) memory.
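A rough sketch of that idea using java.util.PriorityQueue (my own code under the assumptions above; the entry layout and names are made up for illustration): each heap entry is a {value, arrayIndex, position} triple ordered by value, and a match is detected when the smallest value in the heap equals the largest value ever pushed onto it.
// Assumes java.util.List, ArrayList and PriorityQueue are imported, and (as in
// the example above) that each individual array holds distinct values.
public static List<Integer> commonNumbers(int[][] arrays) {
    int n = arrays.length;
    List<Integer> result = new ArrayList<Integer>();
    // Each entry is {value, arrayIndex, positionInThatArray}, ordered by value.
    PriorityQueue<int[]> heap = new PriorityQueue<int[]>((a, b) -> Integer.compare(a[0], b[0]));
    int maxAdded = Integer.MIN_VALUE;            // largest value ever pushed onto the heap
    for (int i = 0; i < n; i++) {
        if (arrays[i].length == 0) return result;   // empty array => empty intersection
        heap.add(new int[] { arrays[i][0], i, 0 });
        maxAdded = Math.max(maxAdded, arrays[i][0]);
    }
    while (true) {
        int[] top = heap.poll();                 // smallest current element
        if (top[0] == maxAdded) {                // min == max => all N heads hold the same value
            result.add(top[0]);
        }
        int nextPos = top[2] + 1;
        if (nextPos == arrays[top[1]].length) break;   // that array is exhausted => done
        int nextVal = arrays[top[1]][nextPos];
        heap.add(new int[] { nextVal, top[1], nextPos });
        maxAdded = Math.max(maxAdded, nextVal);
    }
    return result;
}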
This is how I learned to do it in an algorithms class. Not sure if it's "better", but it uses less memory and less overhead because it iterates straight through the arrays instead of building a map first.
public static List<Integer> commonNumbers(int[] arr1, int[] arr2, int[] arr3, ... , int[] arrN) {
    List<Integer> toReturn = new LinkedList<Integer>();
    int len = arr1.length;
    int j = 0, k = 0, ... , counterN = 0;
    for (int i = 0; i < len; i++) {
        // Check the bound before reading the array to avoid running off the end.
        while (j < len && arr2[j] < arr1[i]) j++;
        while (k < len && arr3[k] < arr1[i]) k++;
        ...
        while (counterN < len && arrN[counterN] < arr1[i]) counterN++;
        if (j == len || k == len || ... || counterN == len) break;   // some array is exhausted
        if (arr1[i] == arr2[j] && arr2[j] == arr3[k] && ... && arr1[i] == arrN[counterN]) {
            toReturn.add(arr1[i]);
        }
    }
    return toReturn;
}
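Since the "..." above is only pseudocode, here is one way the same scan could be written for a variable number of arrays (my own sketch, assuming the arrays are passed as an int[][] with arrays[0] driving the loop, and that java.util.List and LinkedList are imported):
public static List<Integer> commonNumbers(int[][] arrays) {
    List<Integer> toReturn = new LinkedList<Integer>();
    int n = arrays.length;
    int[] cursor = new int[n];                        // cursor[0] is unused; arrays[0] drives the loop
    outer:
    for (int i = 0; i < arrays[0].length; i++) {
        int candidate = arrays[0][i];
        for (int a = 1; a < n; a++) {
            // Move this array's cursor up to the first value >= candidate.
            while (cursor[a] < arrays[a].length && arrays[a][cursor[a]] < candidate) {
                cursor[a]++;
            }
            if (cursor[a] == arrays[a].length) break outer;        // this array is exhausted
            if (arrays[a][cursor[a]] != candidate) continue outer; // candidate not present here
        }
        toReturn.add(candidate);                      // candidate found in every array
    }
    return toReturn;
}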
This may be solved in O(M * N) time, with M being the length of the arrays.
Let's see what happens for N = 2: this is the sorted-list intersection problem, which has a classic merge-like solution running in O(l1 + l2) time (l1 = length of the first array, l2 = length of the second array). (Find out more about Merge Algorithms.)
Now, let's re-apply the algorithm N times in an inductive manner (i.e. at the i-th step we intersect the i-th array with the intersection result of the previous step). This results in an overall O(M * N) algorithm.
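A small sketch of that inductive scheme (my own code, not from the answer; the helper and parameter names are made up, and it assumes java.util imports): intersect the first two arrays with a two-pointer merge, then intersect the running result with each remaining array.
public static List<Integer> commonNumbers(int[][] arrays) {
    List<Integer> common = new ArrayList<Integer>();
    for (int v : arrays[0]) common.add(v);           // start with the first array
    for (int a = 1; a < arrays.length; a++) {
        common = intersectSorted(common, arrays[a]); // O(l1 + l2) per step
    }
    return common;
}

private static List<Integer> intersectSorted(List<Integer> left, int[] right) {
    List<Integer> out = new ArrayList<Integer>();
    int i = 0, j = 0;
    while (i < left.size() && j < right.length) {
        int a = left.get(i), b = right[j];
        if (a == b) { out.add(a); i++; j++; }        // common value
        else if (a < b) i++;                         // advance the smaller side
        else j++;
    }
    return out;
}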
You may also observe that this worst-case upper bound is the best achievable, since any valid algorithm must look at all of the numbers. So no deterministic algorithm with a tighter upper bound can be found.
try
public static Set<Integer> commonNumbers(int[] arr1, int[] arr2, int[] arr3) {
    Set<Integer> s1 = createSet(arr1);
    Set<Integer> s2 = createSet(arr2);
    Set<Integer> s3 = createSet(arr3);
    s1.retainAll(s2);
    s1.retainAll(s3);
    return s1;
}

private static Set<Integer> createSet(int[] arr) {
    Set<Integer> s = new HashSet<Integer>();
    for (int e : arr) {
        s.add(e);
    }
    return s;
}