Given an array with N elements, I am looking for M (M < N) successive sub-arrays with equal lengths, or with lengths that differ by at most 1. For example, if N = 12 and M = 4, all sub-arrays would have the equal length N/M = 3. If N = 100 and M = 12, I expect sub-arrays of lengths 8 and 9, with both sizes spread uniformly over the original array. This simple task turned out to be a little subtle to implement. I came up with an adaptation of Bresenham's line algorithm, which looks like this when coded in C++:
/// The function suggests how an array with num_data items can be
/// subdivided into successively arranged groups (intervals) with
/// equal or "similar" lengths. The number of intervals is specified
/// by the parameter num_intervals. The result is stored in an array
/// with (num_intervals + 1) items, each of which indicates the start
/// index of an interval, the last additional index being a sentinel
/// item which contains the value num_data.
///
/// Example:
///
///   Input:  num_data ........... 14,
///           num_intervals ...... 4
///
///   Result: result_start_idx ... [ 0, 3, 7, 10, 14 ]
///
void create_uniform_intervals( const size_t num_data,
                               const size_t num_intervals,
                               std::vector<size_t>& result_start_idx )
{
    const size_t avg_interval_len  = num_data / num_intervals;
    const size_t last_interval_len = num_data % num_intervals; // remainder to distribute

    // establish the new size of the result vector
    result_start_idx.resize( num_intervals + 1 );

    // write the sentinel value at the end:
    result_start_idx[ num_intervals ] = num_data;

    size_t offset = 0; // current offset

    // use Bresenham's line algorithm to distribute
    // last_interval_len (the remainder) over num_intervals:
    intptr_t error = num_intervals / 2;

    for( size_t i = 0; i < num_intervals; i++ )
    {
        result_start_idx[ i ] = offset;
        offset += avg_interval_len;
        error  -= last_interval_len;
        if( error < 0 )
        {
            offset++;
            error += num_intervals;
        } // if
    } // for
}
For N = 100 and M = 12, this code produces the interval lengths 8 9 8 8 9 8 8 9 8 8 9 8.
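For reference, here is a minimal driver sketch (my addition, assuming the function above is available in the same translation unit); it recovers the lengths by differencing consecutive start indices:

#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    std::vector<size_t> starts;
    create_uniform_intervals( 100, 12, starts );

    // each interval length is the difference of two consecutive start indices
    for( size_t i = 0; i + 1 < starts.size(); i++ )
        std::cout << starts[ i + 1 ] - starts[ i ] << ' ';
    std::cout << '\n'; // prints: 8 9 8 8 9 8 8 9 8 8 9 8
}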
The actual question is that I don't know what exactly this problem is called, so I had difficulty searching for it.
I needed the algorithm as part of a bigger algorithm for clustering data. I think it could also be useful for implementing a parallel sort(?).
If your language has integer division that truncates, an easy way to compute the size of section i is via (N*i + N)/M - (N*i)/M. For example, the Python program

N = 100; M = 12
for i in range(M):
    print((N*i + N) // M - (N*i) // M)

outputs the numbers 8 8 9 8 8 9 8 8 9 8 8 9. With N = 12; M = 5 it outputs 2 2 3 2 3. With N = 12; M = 3 it outputs 4 4 4.

If your section numbers are 1-based rather than 0-based, the expression is instead (N*i)/M - (N*i - N)/M.
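Since the question's code is C++, here is a sketch of the same closed-form rule in that language (the helper name uniform_section_sizes is mine, and it assumes N*i stays well within size_t range so the products do not overflow):

#include <cstddef>
#include <iostream>
#include <vector>

// Section i (0-based) gets (N*i + N)/M - (N*i)/M elements,
// relying on C++'s truncating integer division for size_t.
std::vector<size_t> uniform_section_sizes( const size_t N, const size_t M )
{
    std::vector<size_t> sizes( M );
    for( size_t i = 0; i < M; i++ )
        sizes[ i ] = ( N * i + N ) / M - ( N * i ) / M;
    return sizes;
}

int main()
{
    for( const size_t len : uniform_section_sizes( 100, 12 ) )
        std::cout << len << ' ';
    std::cout << '\n'; // prints: 8 8 9 8 8 9 8 8 9 8 8 9
}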