This is more of a puzzle than a coding problem. I need to find how many binary numbers can be generated satisfying certain constraints. The inputs are
(integer) Len - Number of digits in the binary number
(integer) x
(integer) y
The binary number has to be such that any x adjacent digits taken from it contain at least y 1's.
For example -
Len = 6, x = 3, y = 2
0 1 1 0 1 1 - The length is 6; take any 3 adjacent digits from this and they will contain 2 1's.
I had this C# coding question posed to me in an interview and I cannot figure out an algorithm to solve it. I'm not looking for code (although it's welcome); any sort of help or pointers would be appreciated.
With 4 bits, the maximum possible number is binary 1111, or decimal 15. With 1 byte (8 bits) it is binary 11111111, or decimal 255. In general, an N-bit word can represent only 2^N distinct values.
At each position of the string there are only two possibilities, 0 or 1, so the total number of arrangements of 0 and 1 in a string of length N is 2*2*2*... (N times), i.e., 2^N.
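Since the constraint merely filters this set of 2^N strings, brute-force enumeration works for small Len and makes a handy sanity check for anything cleverer. A C# sketch (the method name is my own):

static int CountByBruteForce(int len, int x, int y)
{
    int total = 0;
    for (int bits = 0; bits < (1 << len); ++bits)      // every binary string of length len
    {
        bool ok = true;
        for (int start = 0; ok && start + x <= len; ++start)
        {
            int ones = 0;                              // 1's in the window [start, start+x)
            for (int k = 0; k < x; ++k)
                ones += (bits >> (start + k)) & 1;
            ok = ones >= y;
        }
        if (ok) ++total;
    }
    return total;
}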
This problem can be solved using dynamic programming. The main idea is to group the binary numbers by their length and their last x-1 bits. If appending a bit sequence to one number yields a number satisfying the constraint, then appending the same bit sequence to any other number in the same group also yields a number satisfying the constraint.
For example, take x = 4, y = 2. Both 01011 and 10011 have the same last 3 bits (011). Appending a 0 to each of them gives 010110 and 100110, both of which satisfy the constraint.
Here is pseudocode:
mask = (1 << (x-1)) - 1            // keeps the last x-1 bits (the group id)
count[0][0] = 1                    // one empty string, in group 0
for (i = 0; i < Len; ++i) {        // grow strings of length i to length i+1
    for (j = 0; j < 1<<i && j < 1<<(x-1); ++j) {
        // Appending bit b turns the last x-1 bits j into the x-bit window j*2+b.
        // That window needs at least y ones, unless the string is still shorter
        // than x bits (i < x-1), in which case there is no full window to check.
        if (i < x-1 || count1Bit(j*2+1) >= y)
            count[i+1][(j*2+1) & mask] += count[i][j]
        if (i < x-1 || count1Bit(j*2) >= y)
            count[i+1][(j*2) & mask] += count[i][j]
    }
}
answer = 0
for (j = 0; j < 1<<(x-1); ++j)     // sum over all groups at full length
    answer += count[Len][j]
This algorithm assumes that Len >= x. The time complexity is O(Len*2^x).
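To make the pseudocode concrete, here is a self-contained C# sketch of the same DP (class and method names are my own; it assumes Len >= x as stated above). For the question's example (Len = 6, x = 3, y = 2) it prints 13, agreeing with brute-force enumeration:

using System;

static class ConstrainedBinaryCount
{
    // count[i, j] = number of valid strings of length i whose last x-1 bits equal j
    static long Count(int len, int x, int y)
    {
        int groups = 1 << (x - 1);
        int mask = groups - 1;
        var count = new long[len + 1, groups];
        count[0, 0] = 1;                             // the empty string

        for (int i = 0; i < len; ++i)
        {
            int upper = i < x - 1 ? 1 << i : groups; // only groups that exist at length i
            for (int j = 0; j < upper; ++j)
                for (int bit = 0; bit <= 1; ++bit)
                {
                    int window = j * 2 + bit;        // newest x-bit window after appending
                    if (i < x - 1 || count1Bit(window) >= y)
                        count[i + 1, window & mask] += count[i, j];
                }
        }

        long answer = 0;
        for (int j = 0; j < groups; ++j)
            answer += count[len, j];
        return answer;
    }

    static int count1Bit(int v)                      // number of 1 bits in v
    {
        int ones = 0;
        for (; v != 0; v >>= 1)
            ones += v & 1;
        return ones;
    }

    static void Main()
    {
        Console.WriteLine(Count(6, 3, 2));           // the question's example: prints 13
    }
}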
EDIT
The count1Bit(j) function counts the number of 1's in the binary representation of j.
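One common way to implement it in C# is Brian Kernighan's trick, which clears the lowest set bit on each pass (a sketch; on .NET Core 3.0+ the built-in System.Numerics.BitOperations.PopCount can be used instead):

static int count1Bit(int j)
{
    int ones = 0;
    while (j != 0)
    {
        j &= j - 1;   // clears the lowest set bit
        ++ones;
    }
    return ones;
}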
The only inputs to this algorithm are Len, x, and y. It starts from an empty binary string [length 0, group 0] and iteratively tries to append 0 and 1 until the length equals Len, grouping the strings along the way and counting, in each group, the number of binary strings that satisfy the 1-bits constraint. The output of the algorithm is answer, the number of binary strings (numbers) satisfying the constraints.
For a binary string in group [length i, group j], appending 0 results in a binary string in group [length i+1, group (j*2)%(2^(x-1))]; appending 1 results in a binary string in group [length i+1, group (j*2+1)%(2^(x-1))].
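In code, this transition is just a shift, an OR, and a mask; a small C# sketch with a hypothetical helper name:

static int NextGroup(int j, int bit, int x)
{
    int mask = (1 << (x - 1)) - 1;   // keep only the last x-1 bits
    return ((j << 1) | bit) & mask;  // equivalent to (j*2 + bit) % 2^(x-1)
}

For instance, with x = 4, appending 1 to a string in group 011 (j = 3) gives NextGroup(3, 1, 4) = 7, i.e., group 111.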
Let count[i,j] be the number of binary strings in group [length i, group j] satisfying the 1-bits constraint. If there are at least y 1's in the binary representation of j*2, then appending 0 to each of these count[i,j] binary strings yields a binary string in group [length i+1, group (j*2)%(2^(x-1))] that also satisfies the constraint. Therefore, we can add count[i,j] into count[i+1, (j*2)%(2^(x-1))]. The case of appending 1 is similar.
The condition i < x-1 in the above algorithm lets the strings grow unconditionally while they are still shorter than x-1 bits, since appending a bit to them cannot yet complete a full x-bit window.