Assuming that the computer running this program has an infinite amount of memory, I'm interested in where Python will break when running the following code.
For fun, I implemented hyperoperators in Python as the module hyperop. One of my examples is Graham's number:
def GrahamsNumber():
    # This may take a while...
    g = 4
    for n in range(1, 64 + 1):
        # Each step uses the previous g as the hyperoperator rank:
        # starting from g = 4, the first pass computes hyperop(6)(3, 3).
        g = hyperop(g + 2)(3, 3)
    return g
The condensed version of the class hyperop looks like this:
class hyperop(object):
    def __init__(self, n):
        self.n = n
        self.lower = hyperop(n - 1)

    def _repeat(self, a, b):
        # Lazily yield `a` b times (b + 1 times in the n == 1 base case),
        # since b may be far too large to materialize as a list.
        if self.n == 1:
            yield a

        i = 1
        while True:
            yield a
            if i == b:
                break
            i += 1

    def __call__(self, a, b):
        # Right fold of the next-lower hyperoperation over the copies of a.
        return reduce(lambda x, y: self.lower(y, x), self._repeat(a, b))
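To make that reduce concrete (an illustration of my own, not from the question): for n = 2 the lower operation is addition, and right-folding addition over b copies of a computes a * b.

from functools import reduce  # reduce is a builtin on 2.7; the import covers 3.x

# Hypothetical stand-in for hyperop(1), i.e. addition.
add = lambda a, b: a + b

# hyperop(2)(3, 4) right-folds addition over four 3s:
# add(3, add(3, add(3, 3))) == 12 == 3 * 4
print(reduce(lambda x, y: add(y, x), [3] * 4))  # 12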
Essentially the library is just a recursive fold-right operation, with a special definition for the base case of n=1. Originally __call__ was beautifully golfed as:
return reduce(lambda x, y: self.lower(y, x), [a,]*b)
However, it turns out that you can't make a list with more elements than fit in a C long. The limit is easy to demonstrate directly (a minimal check of my own; the exact error message varies between versions):
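import sys

try:
    # CPython caps sequence lengths at sys.maxsize, so an oversized
    # repetition count is rejected up front rather than allocated.
    [0] * (sys.maxsize + 1)
except OverflowError as e:
    print(e)  # "cannot fit 'long' into an index-sized integer" on 2.7

That was a fun limitation that most Python programmers probably don't encounter in their normal day-to-day work, and it inspired the following question.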
Where, if at all, will the hyperop calculation fail due to a technical limitation of Python (specifically 2.7.10)?
Maybe the original version of hyperop is robust and only fails for some esoteric reason, but this exact code fails long before any arithmetic happens: the hyperop constructor calls itself unconditionally (self.lower = hyperop(n - 1)), so constructing any instance recurses until Python raises RuntimeError with "maximum recursion depth exceeded". The limit is whatever sys.getrecursionlimit() returns, which is 1000 by default in 2.7.10.
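You can reproduce the failure without computing anything large (a minimal repro of my own; on Python 3.5+ the exception is RecursionError, a subclass of RuntimeError):

import sys

class hyperop(object):
    def __init__(self, n):
        # No base case: every construction triggers another construction.
        self.n = n
        self.lower = hyperop(n - 1)

print(sys.getrecursionlimit())  # 1000 by default on CPython 2.7

try:
    hyperop(5)
except RuntimeError as e:
    print(e)  # maximum recursion depth exceeded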