Link to the original problem
It's not a homework question. I just thought that someone might know a real solution to this problem.
I was at a programming contest back in 2004, and there was this problem:
Given n, find the sum of the digits of n!. n can be from 0 to 10000. Time limit: 1 second. I think there were up to 100 numbers in each test set.
My solution was pretty fast but not fast enough, so I just let it run offline for some time and had it build an array of pre-calculated values that I could then use in my code. It was a hack, but it worked.
But there was a guy who solved this problem with about 10 lines of code, and it gave an answer in no time. I believe it was some sort of dynamic programming, or something from number theory. We were 16 at the time, so it should not have been rocket science.
Does anyone know what kind of algorithm he could have used?
EDIT: I'm sorry if I didn't make the question clear. As mquander said, there should be a clever solution, without bignum, with just plain Pascal code, a couple of loops, O(n²) or something like that. 1 second is not a constraint anymore.
I found here that if n > 5, then 9 divides the sum of the digits of n!. We can also find how many zeros there are at the end of the number. Can we use that?
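The trailing-zero count, at least, is easy to get without computing the factorial, via Legendre's formula (count the factors of 5). A minimal sketch; the function name is my own:

#include <stdio.h>

/* Number of trailing zeros of n!: count the factors of 5 in 1..n
   (factors of 2 are always more plentiful). */
unsigned long trailing_zeros_of_factorial(unsigned long n)
{
    unsigned long zeros = 0;
    for (unsigned long m = n / 5; m > 0; m /= 5)
        zeros += m;          /* adds floor(n/5) + floor(n/25) + ... */
    return zeros;
}

int main(void)
{
    printf("%lu\n", trailing_zeros_of_factorial(10000));   /* prints 2499 */
    return 0;
}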
OK, another problem, from a Russian programming contest: given 1 <= N <= 2 000 000 000, output N! mod (N+1). Is that somehow related?
1) Create a vector to store the digits of the factorial and initialize it with 1.
2) One by one, multiply the numbers from 1 to n into the vector, using school-style multiplication with carries.
3) Sum all the elements of the vector and return the total.
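A minimal C sketch of those three steps (the array size and names are mine; base 10 is used for clarity rather than speed):

#include <stdio.h>

/* Digit sum of n! using a base-10 digit array and school multiplication.
   MAX_DIGITS is sized for n up to 10000 (10000! has about 35,660 digits). */
#define MAX_DIGITS 40000

int factorial_digit_sum(int n)
{
    static int digit[MAX_DIGITS];
    int len = 1;
    digit[0] = 1;                       /* the vector starts out holding 1 */

    for (int k = 2; k <= n; k++) {      /* multiply by 2, 3, ..., n */
        int carry = 0;
        for (int i = 0; i < len; i++) {
            int prod = digit[i] * k + carry;
            digit[i] = prod % 10;
            carry = prod / 10;
        }
        while (carry) {                 /* append any leftover carry digits */
            digit[len++] = carry % 10;
            carry /= 10;
        }
    }

    int sum = 0;
    for (int i = 0; i < len; i++)       /* step 3: total the digits */
        sum += digit[i];
    return sum;
}

int main(void)
{
    printf("%d\n", factorial_digit_sum(100));   /* prints 648 */
    return 0;
}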
What is a digit sum? We obtain the sum of digits by adding the digits of a number while ignoring the place values. So, for example, for the number 567, the digit sum is 5 + 6 + 7, which gives us 18.
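The same computation in code, for a value that still fits in a machine word (which for factorials only holds up to 20!):

#include <stdio.h>

/* Digit sum of an ordinary machine-sized integer. */
int digit_sum(unsigned long long x)
{
    int sum = 0;
    while (x > 0) {
        sum += (int)(x % 10);   /* take the lowest digit... */
        x /= 10;                /* ...then drop it */
    }
    return sum;
}

int main(void)
{
    printf("%d\n", digit_sum(567));   /* 5 + 6 + 7 = 18 */
    return 0;
}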
100! has 158 digits, so it cannot be stored in any built-in integer type, not even a long long int; the digits have to be kept in an array or a string. For reference, the digit sum of 100! is 648.
Given a number n, write code to find the sum of the digits in the factorial of the number. The first line of input contains an integer T denoting the number of test cases. Then T test cases follow; each test case contains a single integer n.
A natural number is a factorion if it is the sum of the factorials of each of its decimal digits. For example, 145 is a factorion because 145 = 1! + 4! + 5!. Find every 3-digit number which is a factorion. The digits can only be 0 through 6, since 7! = 5040 already has four digits.
The only digit factorials we need are 0! through 9!, so we can pre-compute them and store them in a dictionary (or hash map, or simply an array). A single loop from 1 to 9 is enough, multiplying by the next number at each step. We don't need to, and can't, search infinitely many numbers.
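A direct C sketch of that search, using a plain array as the lookup table since the only keys are the digits 0 through 9:

#include <stdio.h>

int main(void)
{
    /* Pre-compute 0! .. 9! with a single loop. */
    int fact[10] = {1};
    for (int d = 1; d <= 9; d++)
        fact[d] = fact[d - 1] * d;

    /* Check every 3-digit number against the sum of its digit factorials. */
    for (int n = 100; n <= 999; n++) {
        int sum = 0;
        for (int m = n; m > 0; m /= 10)
            sum += fact[m % 10];
        if (sum == n)
            printf("%d\n", n);   /* prints 145, the only 3-digit factorion */
    }
    return 0;
}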
In the problem statement you linked to, the biggest factorial that would need to be computed is 1000!. That is a number with about 2500 digits, so just do the straightforward thing: multiply the numbers up to 1000 into an array of digits, exactly as sketched above, and add up the digits at the end.
I'm not sure who is still paying attention to this thread, but here goes anyway.
First, in the official-looking linked version, it only has to be 1000 factorial, not 10000 factorial. Also, when this problem was reused in another programming contest, the time limit was 3 seconds, not 1 second. This makes a huge difference in how hard you have to work to get a fast enough solution.
Second, for the actual parameters of the contest, Peter's solution is sound, but with one extra twist you can speed it up by a factor of 5 on a 32-bit architecture. (Or even a factor of 6 if only 1000! is desired.) Namely, instead of working with individual digits, implement multiplication in base 100000. Then at the end, total the digits within each super-digit. I don't know how fast a computer you were allowed to use in the contest, but I have a desktop at home that is roughly as old as the contest. The following sample code takes 16 milliseconds for 1000! and 2.15 seconds for 10000!. The code also ignores trailing 0s as they show up, but that only saves about 7% of the work.
#include <stdio.h>

int main(void)
{
    /* Each element of dig[] is one base-100000 "super-digit" (5 decimal digits). */
    unsigned int dig[10000], first = 0, last = 0, carry, n, x, sum = 0;

    dig[0] = 1;
    for (n = 2; n <= 10000; n++) {           /* up to 10000 to compute 10000! */
        carry = 0;
        for (x = first; x <= last; x++) {
            carry = dig[x] * n + carry;
            dig[x] = carry % 100000;
            if (x == first && !(carry % 100000))
                first++;                     /* skip super-digits that are now trailing zeros */
            carry /= 100000;
        }
        if (carry)
            dig[++last] = carry;
    }

    /* Total the five decimal digits inside each super-digit. */
    for (x = first; x <= last; x++)
        sum += dig[x] % 10 + (dig[x] / 10) % 10 + (dig[x] / 100) % 10
             + (dig[x] / 1000) % 10 + (dig[x] / 10000) % 10;

    printf("Sum: %u\n", sum);
    return 0;
}
Third, there is an amazing and fairly simple way to speed up the computation by another sizable factor. With modern methods for multiplying large numbers, it does not take quadratic time to compute n!. Instead, you can do it in O-tilde(n) time, where the tilde means that you can throw in logarithmic factors. There is a simple acceleration due to Karatsuba that does not bring the time complexity down to that, but still improves it and could save another factor of 4 or so. In order to use it, you also need to divide the factorial itself into equal sized ranges. You make a recursive algorithm prod(k,n) that multiplies the numbers from k to n by the pseudocode formula
prod(k,n) = prod(k,floor((k+n)/2))*prod(floor((k+n)/2)+1,n)
Then you use Karatsuba to do the big multiplication that results.
Even better than Karatsuba is the Fourier-transform-based Schonhage-Strassen multiplication algorithm. As it happens, both algorithms are part of modern big number libraries. Computing huge factorials quickly could be important for certain pure mathematics applications. I think that Schonhage-Strassen is overkill for a programming contest. Karatsuba is really simple and you could imagine it in an A+ solution to the problem.
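To make the range-splitting idea concrete (this leans on a library, so it is a sketch rather than contest code): GMP's mpz_mul switches from schoolbook to Karatsuba and eventually to FFT-based multiplication as the operands grow, so the balanced prod recursion above maps onto it directly. Compile with -lgmp; the function and variable names are mine.

#include <stdio.h>
#include <stdlib.h>
#include <gmp.h>

/* Balanced product of k..n: splitting the range in half keeps the two
   factors of comparable size, which is what lets sub-quadratic
   multiplication (Karatsuba, Schonhage-Strassen) pay off. */
static void prod(mpz_t result, unsigned long k, unsigned long n)
{
    if (k == n) {
        mpz_set_ui(result, k);
        return;
    }
    unsigned long mid = k + (n - k) / 2;
    mpz_t left, right;
    mpz_init(left);
    mpz_init(right);
    prod(left, k, mid);
    prod(right, mid + 1, n);
    mpz_mul(result, left, right);    /* GMP picks the multiplication algorithm */
    mpz_clear(left);
    mpz_clear(right);
}

int main(void)
{
    mpz_t f;
    mpz_init(f);
    prod(f, 1, 10000);               /* 10000! */

    char *digits = mpz_get_str(NULL, 10, f);
    unsigned long sum = 0;
    for (char *p = digits; *p; p++)
        sum += (unsigned long)(*p - '0');
    printf("%lu\n", sum);

    free(digits);
    mpz_clear(f);
    return 0;
}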
Part of the question posed is some speculation that there is a simple number theory trick that changes the contest problem entirely. For instance, if the question were to determine n! mod n+1, then Wilson's theorem says that the answer is -1 when n+1 is prime, and it's a really easy exercise to see that it's 2 when n=3 and otherwise 0 when n+1 is composite. There are variations of this too; for instance n! is also highly predictable mod 2n+1. There are also some connections between congruences and sums of digits. The sum of the digits of x mod 9 is also x mod 9, which is why the sum is 0 mod 9 when x = n! for n >= 6. The alternating sum of the digits of x mod 11 equals x mod 11.
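That Wilson's-theorem observation really does give a few-line solution to the N! mod (N+1) problem quoted in the question; a sketch, with function names my own:

#include <stdio.h>

/* Trial-division primality test; sqrt(2e9) is only about 45,000 candidates. */
static int is_prime(unsigned long long m)
{
    if (m < 2) return 0;
    if (m % 2 == 0) return m == 2;
    for (unsigned long long d = 3; d * d <= m; d += 2)
        if (m % d == 0) return 0;
    return 1;
}

/* n! mod (n+1) without computing n!: by Wilson's theorem it is n (that is, -1)
   when n+1 is prime, 2 in the special case n = 3, and 0 otherwise. */
unsigned long long factorial_mod_next(unsigned long long n)
{
    if (n == 3) return 2;            /* 3! = 6 = 2 (mod 4) */
    if (is_prime(n + 1)) return n;   /* Wilson: n! = -1 = n (mod n+1) */
    return 0;                        /* composite n+1 > 4 divides n! */
}

int main(void)
{
    /* 2000000001 = 3 * 666666667 is composite, so the answer is 0. */
    printf("%llu\n", factorial_mod_next(2000000000ULL));
    return 0;
}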
The problem is that if you want the sum of the digits of a large number, not modulo anything, the tricks from number theory run out pretty quickly. Adding up the digits of a number doesn't mesh well with addition and multiplication with carries. It's often difficult to promise that the math does not exist for a fast algorithm, but in this case I don't think that there is any known formula. For instance, I bet that no one knows the sum of the digits of a googol factorial, even though it is just some number with roughly 100 digits.