Alice has N coins with values 1 to N. Bob wants to take K coins from her, but Alice will only hand them over if the set of K coins is interesting.
A set of coins is interesting if its sum is divisible by a given integer M. Bob wants to know in how many ways he can pick K coins.
Print the answer modulo 10^9 + 7.
Input format: three space-separated integers N, K, M
Constraints:
- 1 <= N, M <= 10^3
- 1 <= K <= 10^2
Sample input: 4 2 2
Sample output: 2 (the sets {1, 3} and {2, 4})
I tried solving the problem with the combinations function from Python's libraries, but it exceeded the memory limit. I then tried a recursive method, but it exceeded the time limit, taking about 10 seconds per private test case.
Can anyone help me solve this more efficiently?
Code of the recursive method:

cou = 0

def combiUtil(data, n, k, ind, val, i, m):
    global cou
    if ind == k:
        if sum(val) % m == 0:
            cou += 1
        return
    if i >= n:
        return
    val[ind] = data[i]
    combiUtil(data, n, k, ind + 1, val, i + 1, m)  # take data[i]
    combiUtil(data, n, k, ind, val, i + 1, m)      # skip data[i]

def getCombinations(data, n, k, m):
    val = [0] * k
    combiUtil(data, n, k, 0, val, 0, m)

def solve(k, m, n):
    data = [j for j in range(1, n + 1)]
    getCombinations(data, n, k, m)
    return cou % (10 ** 9 + 7)

n, k, m = map(int, input().split())
print(solve(k, m, n))
If you look at the time complexity of the 'try all combinations' brute-force solution, it's O((N choose K) * K), since there are (N choose K) ways to pick K distinct integers from 1 to N inclusive, and evaluating their sum takes K - 1 additions. Since (N choose K) can grow exponentially in N, this is astronomically large for all but tiny values of N and K.
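That brute force is short to write with itertools (a sketch of the approach described above; the function name is mine), which makes the blow-up easy to reproduce:

```python
from itertools import combinations

def count_interesting_sets_brute(n: int, k: int, m: int) -> int:
    # Enumerate all (n choose k) subsets of {1, ..., n} and count
    # those whose sum is divisible by m -- fine for tiny inputs only.
    return sum(1 for c in combinations(range(1, n + 1), k)
               if sum(c) % m == 0) % (10 ** 9 + 7)

print(count_interesting_sets_brute(4, 2, 2))  # prints 2: {1,3} and {2,4}
```

Even at N = 30, K = 10 this already enumerates about 30 million subsets, so it is only useful as a reference to validate faster solutions.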
Better Solution: Dynamic Programming
A much faster and slightly simpler solution is dynamic programming. We can write this as a 3D dynamic programming problem:
Let dp[i][j][r], 0 <= i <= N; 0 <= j <= K; 0 <= r < M
be the number of combinations of j ints from [1, 2, ..., i]
with sum congruent to r modulo M. We want dp[N][K][0]:

dp[i][j][r] = 1                                        if i == j == r == 0
              0                                        if i == 0 and (j != 0 or r != 0)
              1                                        if j == 0 and r == 0
              0                                        if j == 0 and r != 0
              dp[i-1][j][r] + dp[i-1][j-1][(r-i) % M]  otherwise
There are a lot of edge cases in the formula, but the most important aspect is the last case: each subproblem depends on at most 2 other subproblems, so the total runtime is bounded by the size of the DP array: O(N*M*K)
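Taken literally, the recurrence translates into a 3D table. Here is a straightforward sketch of that (the function name is my own choice), before any space optimization:

```python
def count_subsets_3d(n: int, k: int, m: int) -> int:
    BIG_MOD = 10 ** 9 + 7
    # dp[i][j][r] = number of ways to pick j distinct ints from {1..i}
    # with sum congruent to r (mod m)
    dp = [[[0] * m for _ in range(k + 1)] for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0][0] = 1  # one way to pick nothing: the empty set
    for i in range(1, n + 1):
        for j in range(1, k + 1):
            for r in range(m):
                # either skip i, or include i (shifting the residue by i)
                dp[i][j][r] = (dp[i - 1][j][r]
                               + dp[i - 1][j - 1][(r - i) % m]) % BIG_MOD
    return dp[n][k][0]

print(count_subsets_3d(4, 2, 2))  # prints 2, matching the sample
```

Row i only reads row i - 1, so the table can be collapsed to two dimensions, which is what the implementation below does.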
. Here's a Python implementation:
def get_combinations_dp(max_part_size: int, total_num_parts: int, mod: int) -> int:
    BIG_MOD = 10 ** 9 + 7
    # Optimization if no partitions exist
    if total_num_parts > max_part_size:
        return 0
    largest_sum = ((max_part_size * (max_part_size + 1) // 2)
                   - ((max_part_size - total_num_parts) *
                      (max_part_size - total_num_parts + 1) // 2))
    # Optimization if largest partition sum still smaller than mod
    if largest_sum < mod:
        return 0
    dp = [[0 for _ in range(mod)] for _ in range(total_num_parts + 1)]
    dp[0][0] = 1
    for curr_max_part in range(1, max_part_size + 1):
        for curr_num_parts in reversed(range(0, total_num_parts)):
            for rem in range(mod):
                dp[curr_num_parts + 1][(rem + curr_max_part) % mod] += dp[curr_num_parts][rem]
                dp[curr_num_parts + 1][(rem + curr_max_part) % mod] %= BIG_MOD
    return dp[total_num_parts][0]
The parameters are N, K, M, renamed to max_part_size, total_num_parts, mod, with some optional pre-checks to return 0 immediately if there's no valid partition.
Now, suppose you want to do better than O(N*M*K). Here, things get tricky. If you want to do strictly better, the only way I can imagine is to find the generating function for these partitions, and use FFT or some other fast polynomial multiplication modulo 10^9 + 7. For a start on researching how to do this, I'd recommend this thread on the Math Stack Exchange, which pertains to counting precisely these partitions in terms of better-known partition numbers, whose generating functions are already known. Even so, I was unable to find any mention of whether this generating function has a short representation, and using partition numbers directly can't improve on the O(N*M*K) complexity.
Using combinatorics
If you still want to use this dynamic programming approach, there's a small modification using combinatorics which may be asymptotically faster when N is larger than M*K: it runs in time O((M*K)^2), which doesn't depend on N. The idea is to use our DP formula, but instead of choosing K distinct integers from [1, ..., N], we now choose K (possibly repeated) residue classes from [0, ..., M-1].
How does this work? First, we need to count how many ints in [1, ..., N] fall into each residue class i mod M. Call this number R[i], for 0 <= i < M. You can calculate this as

R[i] = floor(N/M) + (1 if 0 < i <= N%M else 0)
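This formula is easy to sanity-check against direct counting (a small sketch; the helper name is mine):

```python
from collections import Counter

def residue_counts(n: int, m: int) -> list:
    # R[i] = floor(n/m), plus one extra when 0 < i <= n % m
    return [n // m + (1 if 0 < i <= n % m else 0) for i in range(m)]

# cross-check against counting the residues of 1..n directly
direct = Counter(x % 3 for x in range(1, 11))
print(residue_counts(10, 3) == [direct[i] for i in range(3)])  # prints True
```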
Now we can write a slightly modified dynamic programming definition and formula:

Let dp[i][j][r], 0 <= i <= M; 0 <= j <= K; 0 <= r < M
be the number of combinations with replacement of j ints from
residue classes [0, 1, ..., i-1] with sum congruent to r modulo M.
We want dp[M][K][0]:

dp[i][j][r] = 1           if i == j == r == 0
              0           if i == 0 and (j != 0 or r != 0)
              0           if i < 0 or j < 0
              F(i, j, r)  otherwise

F(i, j, r) = Sum from p = 0 to min(R[i-1], j) of:
             (R[i-1] choose p) * dp[i-1][j-p][(r - (i-1)*p) % M]

Here p is how many of the j ints come from residue class i-1: there are (R[i-1] choose p) ways to pick p distinct integers out of that class, and together they contribute (i-1)*p to the sum modulo M.
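Putting this together, here is a runnable sketch of the residue-class DP (the function name and the rolling 2D table are my own choices, not code from the original answer; math.comb needs Python 3.8+):

```python
from math import comb

def count_by_residue_classes(n: int, k: int, m: int) -> int:
    BIG_MOD = 10 ** 9 + 7
    # R[i] = how many of 1..n fall in residue class i mod m
    R = [n // m + (1 if 0 < i <= n % m else 0) for i in range(m)]
    # dp[j][r] = ways to pick j ints from the classes processed so far,
    # with sum congruent to r (mod m)
    dp = [[0] * m for _ in range(k + 1)]
    dp[0][0] = 1
    for cls in range(m):  # process residue class cls
        ndp = [[0] * m for _ in range(k + 1)]
        for j in range(k + 1):
            for r in range(m):
                if dp[j][r] == 0:
                    continue
                # take p distinct ints out of the R[cls] members of this class
                for p in range(min(R[cls], k - j) + 1):
                    nr = (r + cls * p) % m
                    ndp[j + p][nr] = (ndp[j + p][nr]
                                      + comb(R[cls], p) * dp[j][r]) % BIG_MOD
        dp = ndp
    return dp[k][0]

print(count_by_residue_classes(4, 2, 2))  # prints 2, matching the sample
```

Note that comb(R[cls], p) should itself be reduced modulo 10^9 + 7 for large inputs; for the constraints here (N <= 10^3) the exact binomials are still cheap to compute.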