What is the fastest way to calculate e to 2 trillion digits?

I want to calculate e to 2 trillion (2,000,000,000,000) digits. This is about 1.8 TiB of pure e. I just implemented a Taylor series expansion algorithm using GMP (code can be found here).

Unfortunately, it crashes when summing more than 4000 terms on my computer, probably because it runs out of memory.
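
The approach boils down to something like the following fixed-point sum (a minimal Python sketch of the same idea, standing in for the actual GMP code linked above; the 10 guard digits are an illustrative choice):

    def e_taylor_naive(digits):
        """Naive fixed-point Taylor sum for e, scaled by 10**digits.
        term holds floor(10**(digits + 10) / k!); every iteration is a
        full-precision division, so the total work grows roughly
        quadratically with the number of digits."""
        scale = 10 ** (digits + 10)   # 10 guard digits against rounding
        total = term = scale          # k = 0 term: 1/0! = 1
        k = 1
        while term:                   # stop once 1/k! < 1/scale
            term //= k
            total += term
            k += 1
        return total // 10 ** 10      # drop the guard digits

    print(e_taylor_naive(50))  # 271828182845904523536028747135266249775724709369995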

What is the current state of the art in computing e? Which algorithm is the fastest? Are there any open-source implementations worth looking at? Please don't mention y-cruncher; it's closed source.

asked Nov 09 '12 by iblue


2 Answers

Since I'm the author of the y-cruncher program that you mention, I'll add my 2 cents.

For such a large task, the two biggest barriers that must be tackled are as follows:

  1. Memory
  2. Run-time Complexity

Memory

2 trillion digits is extreme - to say the least. That's double the current record set by Shigeru Kondo and myself back in 2010. (It took us more than 9 days to compute 1 trillion digits using y-cruncher.)

In plain text, that's about 1.8 TiB at one byte per digit. In packed binary representation, it's 773 GiB, since 2 × 10^12 digits × log2(10) ≈ 6.64 × 10^12 bits.

If you're going to be doing arithmetic on numbers of this size, you're gonna need 773 GiB for each operand, not counting scratch memory.

Feasibly speaking, y-cruncher actually needs 8.76 TiB of memory to do this computation all in RAM. So you can expect other implementations to need roughly the same, give or take a factor of 2 at most.

That said, I doubt you're gonna have enough RAM. And even if you did, it'd be heavily NUMA. So the alternative is to use disk. But this is not trivial: to be efficient, you need to treat memory as a cache and micromanage all data that is transferred between memory and disk.


Run-time Complexity

Here we have the other problem. For 2 trillion digits, you're gonna need a very fast algorithm. Not just any fast algorithm, but a quasi-linear run-time algorithm.

Your current attempt runs in roughly O(N^2) time. So even if you had enough memory, it wouldn't finish in your lifetime.

The standard approach to computing e to high precision runs in O(N log(N)^2) and combines the following algorithms:

  • Binary splitting on the Taylor series expansion of e (see the sketch after this list).
  • FFT-based large multiplication.
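
To make that concrete, here is a minimal sketch of the binary splitting recurrence in Python (its built-in bignums stand in for GMP; Python's own division isn't quasi-linear, so treat this as an illustration of the recurrence rather than a contender). It computes P(a,b)/Q(a,b) = sum over k from a+1 to b of a!/k!, so that e ≈ 1 + P(0,N)/Q(0,N):

    from math import lgamma, log

    def binsplit(a, b):
        """P(a,b)/Q(a,b) = sum_{k=a+1}^{b} a!/k!, with Q(a,b) = (a+1)*...*b.
        The leaves are trivial; the cost is in the merges, which are pure
        big-integer multiplies."""
        if b - a == 1:
            return 1, b               # single term: a!/(a+1)! = 1/(a+1)
        m = (a + b) // 2
        p1, q1 = binsplit(a, m)
        p2, q2 = binsplit(m, b)
        return p1 * q2 + p2, q1 * q2  # P = P1*Q2 + P2, Q = Q1*Q2

    def e_digits(digits):
        """Floor of e * 10**digits; the term count n is sized so that
        n! > 10**(digits + 10)."""
        n = 10
        while lgamma(n + 1) / log(10) < digits + 10:
            n *= 2
        p, q = binsplit(0, n)
        return (p + q) * 10**digits // q   # e ~ 1 + P/Q

    print(e_digits(30))   # 2718281828459045235360287471352

All the heavy lifting is in the multiplies near the top of the recursion, which is exactly where FFT-based multiplication earns its keep.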

Fortunately, GMP already uses FFT-based large multiplication. But it lacks two crucial features:

  1. Out-of-core ("swap") computation, to use disk when there isn't enough memory.
  2. Parallelism.

The second point isn't as important, since you can just wait longer. But for all practical purposes, you're probably gonna need to roll your own. And that's what I did when I wrote y-cruncher.


That said, there are many other loose ends that also need to be taken care of:

  1. The final division will require a fast algorithm like Newton's method (see the sketch after this list).
  2. If you're gonna compute in binary, you're gonna need to do a radix conversion.
  3. If the computation is gonna take a lot of time and a lot of resources, you may need to implement fault-tolerance to handle hardware failures.
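
To illustrate point 1, here's a sketch of a Newton-iteration reciprocal in Python (once more standing in for GMP). Each pass of x <- x*(2 - d*x) doubles the number of correct bits, so a division costs only a handful of full-size multiplications; the exact fix-up at the end is needed because the iteration only lands within a few units of the true quotient:

    def reciprocal(d, t):
        """Return (r, s) with r within a few ulps of 2**s // d and at
        least t correct bits. Each Newton pass doubles the precision."""
        n = d.bit_length()
        prec = 32
        if n <= prec:
            r = (1 << (n + prec)) // d                   # exact seed
        else:
            r = (1 << (2 * prec)) // (d >> (n - prec))   # seed from top 32 bits
        while prec < t:
            # invariant: r ~ 2**(n + prec) / d; this moves it to n + 2*prec
            r = (r << (prec + 1)) - ((d * r * r) >> n)
            prec *= 2
        return r, n + prec

    def divide(p, d):
        """Floor division p // d (p, d > 0) built on the Newton reciprocal."""
        t = max(p.bit_length() - d.bit_length(), 1) + 32  # quotient bits + guard
        r, s = reciprocal(d, t)
        q = (p * r) >> s
        rem = p - q * d               # fix the few-ulp error exactly
        while rem < 0:
            q -= 1
            rem += d
        while rem >= d:
            q += 1
            rem -= d
        return q

    assert divide(10**40 + 12345, 10**15 + 7) == (10**40 + 12345) // (10**15 + 7)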

answered by Mysticial


Since you have a goal for how many digits you want (2 trillion), you can estimate how many terms you'll need to calculate e to that precision. From this, you can estimate how many extra digits of precision you'll need to keep track of to avoid rounding errors at the 2 trillionth place.

If my calculation from Stirling's approximation is correct, N! first exceeds 10 to the 2 trillion at roughly N = 185 billion, so that's about how many terms you'll need (there's a quick check below). The story's a little better than that, though, because you'll start being able to throw away a lot of the numbers in the calculation of the terms well before that.
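
If you want to check that figure, here's a small Python helper (an illustrative sketch; lgamma(N + 1) is ln(N!)) that bisects for the smallest N whose factorial exceeds 10 to the 2 trillion:

    from math import lgamma, log

    def terms_needed(digits):
        """Smallest N with N! > 10**digits, i.e. log10(N!) > digits."""
        lo, hi = 1, 2
        while lgamma(hi + 1) / log(10) <= digits:
            hi *= 2                   # find an upper bound first
        while lo < hi:
            mid = (lo + hi) // 2
            if lgamma(mid + 1) / log(10) > digits:
                hi = mid
            else:
                lo = mid + 1
        return lo

    print(terms_needed(2_000_000_000_000))   # roughly 1.85e11 terms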

Since e is calculated as a sum of inverse factorials, all of your terms are rational, and hence they are expressible as repeating decimals. So the decimal expansion of your terms will be (a) an exponent, (b) a non-repeating part, and (c) a repeating part. There may be some efficiencies you can take advantage of if you look at the terms in this way.
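
For example, the length of the non-repeating part of 1/k! is set by the factors of 2 and 5 in k!, and the period of the repeating part is the multiplicative order of 10 modulo what remains. A toy sketch (brute-force order finding, so only feasible for tiny k; it's here just to illustrate the structure):

    def decimal_structure(n):
        """For 1/n, return (prefix, period): the lengths of the
        non-repeating prefix and of the repeating cycle."""
        twos = fives = 0
        while n % 2 == 0:
            n //= 2
            twos += 1
        while n % 5 == 0:
            n //= 5
            fives += 1
        prefix = max(twos, fives)
        if n == 1:
            return prefix, 0          # terminating decimal
        period, t = 1, 10 % n
        while t != 1:                 # multiplicative order of 10 mod n
            t = t * 10 % n
            period += 1
        return prefix, period

    print(decimal_structure(24))      # (3, 1): 1/4! = 0.041666... = 0.041(6)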

Anyway, good luck!

answered by John