We are given an array of n elements and an integer k. Suppose that we want to slide a window of length k across the array, reporting the largest value contained in each window. For example, given the array
15 10 9 16 20 14 13
With a window of length 4, we would output
[15 10 9 16] 20 14 13 ---> Output 16
15 [10 9 16 20] 14 13 ---> Output 20
15 10 [ 9 16 20 14] 13 ---> Output 20
15 10 9 [16 20 14 13] ---> Output 20
So the result would be
16 20 20 20
I was approaching the problem by keeping track of the maximum element of the window at each point, but ran into a problem when the largest element slides out of the window. At that point, I couldn't think of a fast way to figure out what the largest remaining element is.
Does anyone know of an efficient algorithm for solving this problem?
You can keep a binary search tree of the current window's elements, for example by storing them as value/occurrence pairs. Other than that, your sliding window algorithm should be good enough.
This way, selecting the maximum (the largest element in the window) costs O(log L) time, L being the length of the current window; adding a new element is also O(log L). To delete the oldest element, search for its value and decrement its count by 1; if the count reaches 0, remove it from the tree.
So the total time will be O(N log L), N being the length of the array.
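Here is a minimal C++ sketch of this idea, using std::multiset (a self-balancing BST that keeps duplicate values, playing the role of the value/occurrence pairs); the function name slidingWindowMax is my own placeholder:

    #include <iostream>
    #include <set>
    #include <vector>

    // BST-based sliding window maximum: the multiset holds exactly the
    // elements currently in the window. Insert and erase are O(log L);
    // the maximum is the last element of the set.
    std::vector<int> slidingWindowMax(const std::vector<int>& a, std::size_t k) {
        std::multiset<int> window;
        std::vector<int> result;
        for (std::size_t i = 0; i < a.size(); ++i) {
            window.insert(a[i]);                     // add the new element
            if (i >= k)                              // drop the element that slid out;
                window.erase(window.find(a[i - k])); // erase ONE occurrence, not all
            if (i + 1 >= k)
                result.push_back(*window.rbegin());  // max = largest value in the BST
        }
        return result;
    }

    int main() {
        for (int m : slidingWindowMax({15, 10, 9, 16, 20, 14, 13}, 4))
            std::cout << m << ' ';                   // prints: 16 20 20 20
        std::cout << '\n';
    }

Note that erase(iterator) removes a single occurrence, which matches the "decrement the count by 1" semantics above; erase(value) would remove all duplicates at once.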
This older question discusses how to build a queue data structure supporting insert, dequeue, and find-min all in O(1) time. Note that this is not a standard priority queue, but instead is a queue in which at any point you can find the value of the smallest element it contains in O(1) time. You could easily modify this structure to support find-max in O(1) instead of find-min, since that's more relevant to this particular problem.
Using this structure, you can solve this problem in O(n) time as follows:
1. Enqueue the first k array elements, then call find-max to report the maximum of the first window.
2. For each remaining element: dequeue the oldest element, enqueue the new one, and call find-max to report the maximum of the current window.
This takes a total of O(n) time, since you visit each array element once, enqueue and dequeue each element at most once, and call find-max exactly n - k + 1 times. I think this is pretty cool, since the complexity is independent of k, which doesn't initially seem like it should be possible.
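For concreteness, here is a rough C++ sketch along these lines. It assumes the two-stack construction commonly given for that linked question (each stack entry carries a running maximum), adapted to track maxima instead of minima; the class and method names are my own:

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <stack>
    #include <utility>
    #include <vector>

    // A queue supporting enqueue, dequeue, and find-max in amortized O(1),
    // built from two stacks whose entries store {value, max at or below}.
    class MaxQueue {
        std::stack<std::pair<int, int>> in_, out_;

        static void push(std::stack<std::pair<int, int>>& s, int v) {
            int m = s.empty() ? v : std::max(v, s.top().second);
            s.push({v, m});
        }

    public:
        void enqueue(int v) { push(in_, v); }

        void dequeue() {                  // assumes the queue is non-empty
            if (out_.empty()) {           // move everything across, reversing order
                while (!in_.empty()) {
                    push(out_, in_.top().first);
                    in_.pop();
                }
            }
            out_.pop();
        }

        int findMax() const {             // max of both stacks' running maxima
            if (in_.empty())  return out_.top().second;
            if (out_.empty()) return in_.top().second;
            return std::max(in_.top().second, out_.top().second);
        }
    };

    // Slide a window of length k across a, reporting each window's maximum.
    std::vector<int> slidingWindowMax(const std::vector<int>& a, std::size_t k) {
        MaxQueue q;
        std::vector<int> result;
        for (std::size_t i = 0; i < a.size(); ++i) {
            q.enqueue(a[i]);
            if (i + 1 >= k) {
                result.push_back(q.findMax());
                q.dequeue();              // drop the element leaving the window
            }
        }
        return result;
    }

    int main() {
        for (int m : slidingWindowMax({15, 10, 9, 16, 20, 14, 13}, 4))
            std::cout << m << ' ';        // prints: 16 20 20 20
        std::cout << '\n';
    }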
Hope this helps! And thanks for asking a cool question!
The best I can come up with quickly is O(n log m), where m is the window length. You can get that by dynamic programming.
In the first pass, compute for every element the maximum of the element itself and the one after it.
Now you have a maximum for each starting index (window size = 2).
On this array, take the maximum of each element and the one two positions ahead; that gives you, for each starting index, the maximum of the next 4 elements (window size = 4).
Then you can do it again and again, and each time the window size doubles.
Since the window size grows exponentially, only O(log m) passes are needed.
Therefore the runtime is O(n log m). The implementation is a bit tricky, because you must handle the corner and special cases (especially when the window size is not a power of two), but they don't influence the asymptotic runtime.
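Here is one possible C++ sketch of this doubling scheme; the handling of non-power-of-two window sizes (covering a length-m window with two overlapping power-of-two windows, as in a sparse table) is my guess at the special cases alluded to above:

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <utility>
    #include <vector>

    // Each pass combines adjacent maxima, so after pass j, best[i] holds
    // the maximum of the 2^j elements starting at index i. O(log m) passes
    // of O(n) work each give O(n log m) total.
    std::vector<int> slidingWindowMax(const std::vector<int>& a, std::size_t m) {
        std::size_t n = a.size();
        if (m == 0 || m > n) return {};

        std::size_t w = 1;             // current power-of-two window size
        std::vector<int> best = a;     // best[i] = max of a[i .. i + w - 1]
        while (w * 2 <= m) {
            std::vector<int> next(n - 2 * w + 1);
            for (std::size_t i = 0; i < next.size(); ++i)
                next[i] = std::max(best[i], best[i + w]);
            best = std::move(next);
            w *= 2;                    // the window size doubles each pass
        }

        // Cover each length-m window with two length-w windows: one
        // starting at its left end, one ending at its right end.
        std::vector<int> result(n - m + 1);
        for (std::size_t i = 0; i < result.size(); ++i)
            result[i] = std::max(best[i], best[i + m - w]);
        return result;
    }

    int main() {
        for (int x : slidingWindowMax({15, 10, 9, 16, 20, 14, 13}, 4))
            std::cout << x << ' ';     // prints: 16 20 20 20
        std::cout << '\n';
    }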