CPU and/or RAM performance when working with big integers

Tags:

c#

cpu

Yesterday I was solving an exam problem when I found something very interesting (at least to me). The program computes factorials (very big ones), and the result is how many zeros there are at the end of the number (in some cases 2,500 zeros). So I did what I could, but I found that when I enter a number like 100 000, it takes exactly 1:30 to 1:33 min to output the result. I thought it was because of my CPU (it is not very fast). I sent the .exe to some of my friends to try, because they have very good PCs when it comes to performance: exactly the same result (1:33 min).

My question is: why is the time to solve the task the same? I know there are better ways to write my code so it wouldn't take so long, but as a beginner programmer it is very important for me to understand this. So here is my code:

using System;
using System.Numerics;

class Program
{
    static void Main()
    {
        int num = int.Parse(Console.ReadLine()),
            zeroCounter = 0;
        BigInteger fact = 1;
        var startTime = DateTime.Now;

        Console.WriteLine();

        for (int i = 1; i <= num; i++)
        {
            fact *= i;
            Console.Write("\r{0}", DateTime.Now - startTime);
        }
        BigInteger factTarget = fact;
        while (factTarget % 10 == 0)
        {
            factTarget /= 10;
            zeroCounter++;
            Console.Write("\r{0}", DateTime.Now - startTime);
        }
        Console.WriteLine();
        Console.WriteLine("Result is number with {0} zeros.", zeroCounter);
        Console.WriteLine();
        Console.WriteLine("Finished for: {0}", DateTime.Now - startTime);
        Console.WriteLine();
        Console.WriteLine("\nPress any key to exit...");
        Console.ReadKey();
    }
}

I am very sorry if this is the wrong place to ask; I did my best to find what I was looking for before posting this.

Asked May 02 '15 by Kadiyski

1 Answer

The thing that I notice immediately about your code is that you have included Console.Write() calls in your computational loops.

The fact is, I/O is much slower for a computer to handle than computations, even under ideal conditions. And I wouldn't say that the Windows console window is a particularly efficient implementation of that particular kind of I/O. Furthermore, I/O tends to be less dependent on CPU and memory differences from machine to machine.

In other words, it seems very likely to me that you are primarily measuring I/O throughput and not computational throughput, and so it's not surprising to see consistent results between machines.

For what it's worth, when I run your example on my laptop, if I disable the output I can complete the computation in about a minute. I get something closer to your 1:30 time if I use the code as-is.
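As a rough illustration of that difference (this sketch is mine, not part of the original answer, and it assumes a reference to System.Numerics), here is the same computation with the per-iteration Console.Write calls removed and a single Stopwatch measurement taken around the loops:

using System;
using System.Diagnostics;
using System.Numerics;

class FactorialTiming
{
    static void Main()
    {
        // Same computation as the question, but with no console output
        // inside the loops, so the timing reflects the BigInteger work itself.
        int num = int.Parse(Console.ReadLine());
        var stopwatch = Stopwatch.StartNew();

        BigInteger fact = 1;
        for (int i = 1; i <= num; i++)
        {
            fact *= i;          // no Console.Write here
        }

        int zeroCounter = 0;
        while (fact % 10 == 0)
        {
            fact /= 10;         // no Console.Write here either
            zeroCounter++;
        }

        stopwatch.Stop();
        Console.WriteLine("Result is number with {0} zeros.", zeroCounter);
        Console.WriteLine("Finished in: {0}", stopwatch.Elapsed);
    }
}

With the output gone, the measurement should reflect only the BigInteger arithmetic, so it should vary noticeably between machines with different CPU and memory speeds.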


EDIT: I recommend the answer from Hans Passant as well. Memory I/O is still I/O and is, as I describe above, much less variable from machine to machine than CPU speed. It's my hope that the above general-purpose description gives ideas for where the difference could be (without access to each of the machines in question, there's not really any way to know for sure what the cause is), but Hans's answer provides some very good detail about the memory I/O issue in particular and is very much worth reading.
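To get a rough feel for the memory side (again a sketch of my own, not taken from either answer), you can watch how large the running product becomes. By the time the loop reaches 100 000, the BigInteger holds roughly 190 KB of data, so every further multiplication has to read and write buffers of that size:

using System;
using System.Numerics;

class FactorialGrowth
{
    static void Main()
    {
        // Track how large the running product gets; each multiplication
        // must stream the whole value through memory.
        BigInteger fact = 1;
        for (int i = 1; i <= 100000; i++)
        {
            fact *= i;
            if (i % 20000 == 0)
            {
                Console.WriteLine("{0,7}! takes about {1:N0} bytes",
                                  i, fact.ToByteArray().Length);
            }
        }
    }
}

On machines whose CPUs differ but whose memory bandwidth is similar, that part of the work can end up taking a similar amount of time.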

Answered Oct 30 '22 by Peter Duniho