I have attempted an exercise from Computational Physics by Newman and written the following code for an adaptive trapezoidal rule. When the error estimate of a slice is larger than the permitted value, it divides that slice into two halves. I am just wondering what else I can do to make the algorithm more efficient.
points = []

def trap_adapt(f, a, b, epsilon=1.0e-8):
    def step(x1, x2, f1, f2):
        xm = (x1 + x2) / 2.0                  # midpoint of the current slice
        fm = f(xm)
        h1 = x2 - x1
        h2 = h1 / 2.0
        I1 = (f1 + f2) * h1 / 2.0             # one trapezoid on [x1, x2]
        I2 = (f1 + 2*fm + f2) * h2 / 2.0      # two trapezoids on the two halves
        error = abs((I2 - I1) / 3.0)          # leading term in the error expression
        if error <= h2 * delta:
            points.append(xm)  # record the midpoints to check that more points are used in rapidly varying regions
            return h2/3 * (f1 + 4*fm + f2)    # Simpson value built from the same evaluations
        else:
            return step(x1, xm, f1, fm) + step(xm, x2, fm, f2)

    delta = epsilon / (b - a)
    fa, fb = f(a), f(b)
    return step(a, b, fa, fb)
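For context, I call it like this (the integrand and interval here are just examples):

    import math

    result = trap_adapt(lambda x: math.exp(-x) * math.sin(x)**2, 0.0, 10.0)
    print(result, "using", len(points), "accepted midpoints")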
Besides, I used a few simple test integrals to compare this with Romberg integration, and found that for the same accuracy this adaptive method uses many more points to calculate the integral.
Is that just an inherent limitation? Are there any advantages to using this adaptive algorithm instead of the Romberg method? Any ways to make it faster and more accurate?
Your code is refining to meet an error tolerance in each individual subinterval. It's also using a low-order integration rule. Improvements in both of these can significantly reduce the number of function evaluations.
Rather than considering the error in each subinterval separately, more advanced codes compute the total error over all the subintervals and refine until the total error is below the desired threshold. Subintervals are chosen for refinement according to their contribution to the total error, with larger errors being refined first. Typically a priority queue is used to quickly choose the subinterval for refinement.
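Here is a minimal sketch of that idea, reusing the trapezoid/Simpson estimates from your code but driving the refinement with a heap keyed on each subinterval's error (the function names are mine, not from any library):

    import heapq

    def adaptive_global(f, a, b, epsilon=1.0e-8):
        """Globally adaptive quadrature sketch: keep splitting the subinterval
        with the largest estimated error until the summed error estimate
        drops below epsilon."""
        def make_interval(x1, x2, f1, f2):
            xm = 0.5 * (x1 + x2)
            fm = f(xm)
            h = x2 - x1
            I1 = h / 2.0 * (f1 + f2)                # one trapezoid
            I2 = h / 4.0 * (f1 + 2.0*fm + f2)       # two trapezoids
            err = abs(I2 - I1) / 3.0                # same error estimate as in the question
            val = h / 6.0 * (f1 + 4.0*fm + f2)      # Simpson value from the same data
            # heapq is a min-heap, so store -err to pop the largest error first
            return (-err, x1, x2, xm, f1, fm, f2, val)

        first = make_interval(a, b, f(a), f(b))
        heap = [first]
        total, total_err = first[7], -first[0]
        while total_err > epsilon:
            neg_err, x1, x2, xm, f1, fm, f2, val = heapq.heappop(heap)
            total -= val
            total_err += neg_err                    # remove this interval's contribution
            for piece in (make_interval(x1, xm, f1, fm),
                          make_interval(xm, x2, fm, f2)):
                total += piece[7]
                total_err -= piece[0]
                heapq.heappush(heap, piece)
        return total

Accumulating the error estimates this way lets well-behaved regions stop refining early, while the few troublesome subintervals keep getting split.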
Higher-order integration rules can integrate more complicated functions exactly. For example, your code is based on Simpson's rule, which is exact for polynomials of degree up to 3. A more advanced code will probably use a rule that's exact for polynomials of much higher degree (say 10-15).
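As one concrete example of a higher-order rule, an n-point Gauss-Legendre formula is exact for polynomials up to degree 2n - 1; here is a sketch using NumPy (the choice n = 10 and the vectorized integrand are my assumptions, not anything from your code):

    import numpy as np

    def gauss_legendre(f, a, b, n=10):
        """Fixed n-point Gauss-Legendre rule on [a, b];
        exact for polynomials of degree up to 2n - 1.
        Assumes f accepts NumPy arrays."""
        x, w = np.polynomial.legendre.leggauss(n)   # nodes and weights on [-1, 1]
        xm, xr = 0.5 * (a + b), 0.5 * (b - a)       # affine map [-1, 1] -> [a, b]
        return xr * np.sum(w * f(xm + xr * x))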
From a practical point of view, the simplest thing is to use a canned routine that implements the above ideas, e.g., scipy.integrate.quad. Unless you have particular knowledge of what you want to integrate, you're unlikely to do better.
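For example:

    from scipy.integrate import quad
    import numpy as np

    # quad returns the estimated integral and an estimate of its absolute error
    value, abserr = quad(lambda x: np.exp(-x) * np.sin(x)**2, 0.0, 10.0, epsabs=1.0e-8)
    print(value, abserr)

Under the hood quad calls QUADPACK, which combines a high-order Gauss-Kronrod rule with the kind of globally adaptive subdivision described above.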
Romberg integration requires evaluation at equally-spaced points. If you can evaluate the function at any point, then other methods are generally more accurate for "smooth" (polynomial-like) functions. And if your function is not smooth everywhere, then an adaptive code will do much better because it can focus on beating down the error in the non-smooth regions.
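You can see this with the code from your question: integrate something that is not smooth at a point and check where the accepted midpoints land (the integrand here is just an illustration, and points is the module-level list from your code):

    import math

    points.clear()   # reset the midpoint list from the question's code
    trap_adapt(lambda x: math.sqrt(abs(x)), -1.0, 1.0, epsilon=1.0e-8)
    near_kink = sum(1 for x in points if abs(x) < 0.1)
    print(len(points), "midpoints in total,", near_kink, "of them within 0.1 of x = 0")

The density of midpoints should increase sharply toward x = 0, which is where the trapezoid estimates converge slowly.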