This is a simplified version of the actual question I am working on.
Suppose that we have a sequence of (mathematical) functions, f_0, f_1, f_2, ..., such that each function is the integral of the previous one:

f_{n+1}(x) = ∫_0^x f_n(t) dt.
We fix a function f_0, say, f_0(x)=x. In this simple case, f_n(x) = 1/((n+1)!) x^{n+1}. Is there an elegant and efficient way to program this recursively, so that Python returns the function f_n given an arbitrary function f_0?
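For example, integrating once gives f_1(x) = 1/(2!) x^2, and integrating again gives f_2(x) = 1/(3!) x^3, consistent with the formula above.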
I started out trying to return f_2, but this already failed:
from scipy.integrate import quad
f_0=lambda x: x
f_1=lambda x: quad(f_0,0,x)
f_2=lambda x: quad(f_1,0,x)
returns the error
error: Supplied function does not return a valid float.
after trying to evaluate e.g. f_2(3).
There is nothing wrong with chaining numeric integrals. The problem is what quad returns:
Returns:
y : float
    The integral of func from a to b.
abserr : float
    An estimate of the absolute error in the result.
So quad returns two values, but only the first one is interesting here (unless you want to propagate the error estimate through all the integrals, which is a separate problem). The second integral complains because its integrand, f_1, returns this tuple of two values instead of a single scalar.
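You can see this by calling quad directly on f_0 (the exact error estimate in the second slot will vary):
from scipy.integrate import quad
f_0=lambda x: x
print(quad(f_0,0,3))  # a 2-tuple like (4.5, 4.99e-14): (value, abserr)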
The following small modification will fix the error by selecting the first of quad's return values:
from scipy.integrate import quad
f_0=lambda x: x
f_1=lambda x: quad(f_0,0,x)[0]
f_2=lambda x: quad(f_1,0,x)[0]
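With this change, f_2(3) evaluates without an error:
print(f_2(3))  # roughly 4.5, matching 1/(3!) * 3**3 = 27/6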
For the sake of completeness, here is a general definition that iterates the integration n times:
def integrate_a_lot(f, n):
    for _ in range(n):
        # Bind the current f as a default argument so that each new lambda
        # integrates the previous function, not the last one defined.
        f = lambda x, f=f: quad(f, 0, x)[0]
    return f
f_2 = integrate_a_lot(f_0, 2)
f_42 = integrate_a_lot(f_0, 42)
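Since the question asked for a recursive formulation, the same idea can also be written with explicit recursion; this is just a sketch equivalent to the loop above, and integrate_recursive is an illustrative name. Note that either way, evaluating f_n triggers n levels of nested quad calls, so the cost grows very quickly with n.
def integrate_recursive(f, n):
    # Base case: integrating zero times just returns f itself.
    if n == 0:
        return f
    # Wrap f in one more numerical integral and recurse on n - 1.
    return integrate_recursive(lambda x, f=f: quad(f, 0, x)[0], n - 1)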