I know this has been asked before with a limited example for scaling [-1, 1] intervals to [a, b] (Different intervals for Gauss-Legendre quadrature in numpy), but no one has posted how to generalize this for [-a, Infinity] (as is done below, but not (yet) fast). This also shows how to call a fairly complex function (from quantitative option pricing, in this case) with several implementations. There is the benchmark quad code, followed by leggauss, with links to code examples on how to implement an adaptive algorithm. I have worked through most of the linked adaptive algorithm's difficulties; it currently prints the sum of the split integral to show that the split works correctly. Here you will find functions to convert a range from [-1, 1] to [0, 1] to [a, Infinity] (thanks @AlexisClarembeau). To use the adaptive algorithm I had to create another function to convert from [-1, 1] to [a, b], which is fed back into the [a, Infinity] function.
import numpy as np
from scipy.stats import norm, lognorm
from scipy.integrate import quad

a = 0
degrees = 50
flag = -1.0000
F = 1.2075
K = 0.1251
vol = 0.43
T2 = 0.0411
T1 = 0.0047

def integrand(x, flag, F, K, vol, T2, T1):
    d1 = (np.log(x / (x+K)) + 0.5 * (vol**2) * (T2-T1)) / (vol * np.sqrt(T2 - T1))
    d2 = d1 - vol*np.sqrt(T2 - T1)
    mu = np.log(F) - 0.5 * vol**2 * T1
    sigma = vol * np.sqrt(T1)
    return lognorm.pdf(x, mu, sigma) * (flag * x*norm.cdf(flag * d1) - flag * (x+K)*norm.cdf(flag * d2))

def transform_integral_0_1_to_Infinity(x, a):
    # substitution x -> a + t/(1-t) maps t in [0, 1) to [a, Infinity); 1/(1-t)^2 is the Jacobian
    return integrand(a + (x/(1-x)), flag, F, K, vol, T2, T1) * (1/(1-x)**2)

def transform_integral_negative1_1_to_0_1(x, a):
    # substitution t -> (u+1)/2 maps u in [-1, 1] to [0, 1]; 1/2 is the Jacobian
    return 0.5 * transform_integral_0_1_to_Infinity((x+1)/2, a)

def transform_integral_negative1_1_to_a_b(x, w, a, b):
    # rescale the [-1, 1] Gauss-Legendre nodes to the sub-interval [a, b] and apply the weights
    return np.sum(w * (0.5 * transform_integral_0_1_to_Infinity(((x+1)/2*(b-a)+a), a)))

def adaptive_integration(x, w, a=-1, b=1, lastsplit=False, precision=1e-10):
    # split the integral in half assuming [-1, 1] range
    midpoint = (a+b)/2
    interval1 = transform_integral_negative1_1_to_a_b(x, w, a, midpoint)
    interval2 = transform_integral_negative1_1_to_a_b(x, w, midpoint, b)
    return interval1 + interval2  # just shows this is correct for splitting the interval

def integrate(x, w, a):
    return np.sum(w * transform_integral_negative1_1_to_0_1(x, a))

x, w = np.polynomial.legendre.leggauss(degrees)
quadresult = quad(integrand, a, np.inf, args=(flag, F, K, vol, T2, T1), epsabs=1e-1000)[0]
GL = integrate(x, w, a)
print("Adaptive Sum Result:")
print(adaptive_integration(x, w))
print("GL result")
print(GL)
print("QUAD result")
print(quadresult)
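As a quick sanity check of the transform chain, a minimal sketch (my addition, assuming the simple substitute integrand exp(-x) in place of the option-pricing integrand, so the exact value over [0, Infinity) is 1):

# Sanity check of the [-1, 1] -> [0, 1] -> [a, Infinity) chain on a known integral,
# using exp(-x) instead of the option-pricing integrand (exact answer: 1).
def simple_integrand(t):
    return np.exp(-t)

def simple_to_infinity(t, a):
    # x = a + t/(1-t), Jacobian 1/(1-t)^2
    return simple_integrand(a + t/(1 - t)) / (1 - t)**2

def simple_to_0_1(u, a):
    # t = (u+1)/2, Jacobian 1/2
    return 0.5 * simple_to_infinity((u + 1)/2, a)

x_chk, w_chk = np.polynomial.legendre.leggauss(50)
print(np.sum(w_chk * simple_to_0_1(x_chk, 0.0)))  # should be close to 1.0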
I still need to increase the speed and accuracy with fewer dimensions, since I can't manually adjust the degrees for every -a range to get convergence. To illustrate why this is a problem, put in these values instead: a=-20, F=50, then run. You can increase degrees=1000 and see that there is no benefit to this Gauss-Legendre algorithm if it is not applied intelligently. My speed requirement is to get to 0.0004s per loop, whereas the last algorithm I Cythonized took about 0.75s, which is why I am trying to use a low-degree, high-accuracy algorithm with Gauss-Legendre. With Cython and multi-threading, this requirement translates to roughly 0.007s per loop for a completely optimized Python implementation (a non-vectorized, loop-ridden, inefficient routine could be 0.1s per loop with degrees=20), i.e. timed with %timeit adaptive_integration(x, w).
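For timing outside IPython, a minimal sketch using the standard timeit module (my addition; it assumes x, w, and adaptive_integration are already defined as above):

# time adaptive_integration with the standard library instead of IPython's %timeit
import timeit

n_runs = 1000
total = timeit.timeit(lambda: adaptive_integration(x, w), number=n_runs)
print("seconds per loop:", total / n_runs)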
A possible solution, which I've half implemented, is adaptive integration as described on pages 5/6 of http://online.sfsu.edu/meredith/Numerical_Analysis/improper_integrals. The interval a-b (for this case I wrote the transform_integral_negative1_1_to_a_b function) is divided in two at 0.5, the function is evaluated on each half interval, and the sum of the two results 0->0.5 + 0.5->1 is compared to the function result for the whole range 0->1. If the accuracy is not within tolerance, the range is further subdivided at 0.25 and 0.75, the function is again evaluated for each subinterval, and the results are compared to the prior half-interval sums at 0.5. If one side is within tolerance (e.g. abs(0->0.5 - (0->0.25 + 0.25->0.5)) < precision) but the other side is not, splitting stops on the side within tolerance but continues on the other side until precision is reached. At that point the results for each slice of the interval are summed to obtain the full integral with higher accuracy.
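A minimal sketch of that recursive bisection on a generic finite interval (my own illustration, not the pricing integrand: it assumes a plain callable f, a fixed Gauss-Legendre rule per panel, and a depth cap to guarantee termination):

import numpy as np

def gl_panel(f, a, b, x, w):
    # fixed-degree Gauss-Legendre estimate of the integral of f over [a, b]
    t = 0.5 * (b - a) * (x + 1.0) + a        # map nodes from [-1, 1] to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))  # (b - a)/2 is the Jacobian

def adaptive_gl(f, a, b, x, w, precision=1e-10, max_depth=50):
    whole = gl_panel(f, a, b, x, w)
    mid = 0.5 * (a + b)
    left = gl_panel(f, a, mid, x, w)
    right = gl_panel(f, mid, b, x, w)
    # stop when splitting no longer changes the estimate, or when out of depth
    if abs(whole - (left + right)) < precision or max_depth == 0:
        return left + right
    # otherwise refine each half independently, so a converged side stops
    # splitting while the other side keeps going
    return (adaptive_gl(f, a, mid, x, w, precision, max_depth - 1) +
            adaptive_gl(f, mid, b, x, w, precision, max_depth - 1))

# usage example: integral of exp(-t) over [0, 10], exact value 1 - exp(-10)
x_gl, w_gl = np.polynomial.legendre.leggauss(10)
print(adaptive_gl(lambda t: np.exp(-t), 0.0, 10.0, x_gl, w_gl))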
There are likely faster and better ways of approaching this problem; I don't care, as long as it is fast and accurate. Here is the best description of integration routines I've come across, for reference: http://orion.math.iastate.edu/keinert/computation_notes/chapter5.pdf. The award is a 100-point bounty plus 15 points for answer acceptance. Thank you for assisting in making this code FAST and ACCURATE!
EDIT:
Here is my change to the adaptive_integration code; if someone can make this work fast, I can accept an answer and award the bounty. The Mathematica code on page 7 of http://online.sfsu.edu/meredith/Numerical_Analysis/improper_integrals does the routine I attempted. It has to work on a case that doesn't converge well; see the variables below. Right now my code either errors out with RecursionError: maximum recursion depth exceeded in comparison on some inputs or when degrees is set too high, or it doesn't get close to the quad result when it does run, so something is apparently wrong here.
def adaptive_integration(x, w, a, b, integralA2B, remainingIterations, firstIteration, precision=1e-9):
    # split the integral in half assuming [-1, 1] range
    if remainingIterations == 0:
        print('Adaptive integration failed on the interval', a, '->', b)
        # note: execution falls through to another split here rather than returning
    if np.isnan(integralA2B):
        return np.nan
    midpoint = (a+b)/2
    interval1 = transform_integral_negative1_1_to_a_b(x, w, a, midpoint)
    interval2 = transform_integral_negative1_1_to_a_b(x, w, midpoint, b)
    if np.abs(integralA2B - (interval1 + interval2)) < precision:
        return interval1 + interval2
    else:
        return (adaptive_integration(x, w, a, midpoint, interval1, remainingIterations-1, False) +
                adaptive_integration(x, w, midpoint, b, interval2, remainingIterations-1, False))

# This example doesn't converge to quad
# non-converging interval inputs
a = 0  # AND a = -250
degrees = 10
flag = 1
F = 50
K = 0.1251
vol = 0.43
T2 = 0.0411
T1 = 0.0047

print(adaptive_integration(x, w, -1, 1, GL, 500, False))
The output with degrees=100 (after calculating GL with degrees=10000 for a better initial estimate; otherwise the algorithm apparently always agrees with its own estimate and never invokes the adaptive path, which fails every time it is taken):
GL result:
60.065205169286379
Adaptive Sum Result:
RecursionError: maximum recursion depth exceeded in comparison
QUAD result:
68.72069173210338
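One way the recursion could be made to terminate, as a hedged sketch (my assumption being that returning the current two-panel estimate once the iteration budget is exhausted is an acceptable fallback; the rest follows the code above):

def adaptive_integration_capped(x, w, a, b, integralA2B, remainingIterations, precision=1e-9):
    if np.isnan(integralA2B):
        return np.nan
    midpoint = (a + b) / 2
    interval1 = transform_integral_negative1_1_to_a_b(x, w, a, midpoint)
    interval2 = transform_integral_negative1_1_to_a_b(x, w, midpoint, b)
    if remainingIterations == 0:
        # give up refining this panel instead of recursing past the budget
        print('Adaptive integration hit the iteration cap on', a, '->', b)
        return interval1 + interval2
    if np.abs(integralA2B - (interval1 + interval2)) < precision:
        return interval1 + interval2
    return (adaptive_integration_capped(x, w, a, midpoint, interval1, remainingIterations - 1, precision) +
            adaptive_integration_capped(x, w, midpoint, b, interval2, remainingIterations - 1, precision))

With this variant a modest cap (for example 20, giving at most 2^20 panels in the worst case) is more practical than 500.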
I think this code does the job:

import numpy as np
import math

deg = 10
x, w = np.polynomial.legendre.leggauss(deg)

def function(x):
    # the function to integrate
    return math.exp(-x)

def function2(x, a):
    # substitution x = a + t/(1-t): maps t in [0, 1) to [a, Infinity)
    return function(a + x/(1 - x)) / ((1 - x)**2)

def anotherOne(x, a):
    # substitution t = (u+1)/2: maps u in [-1, 1] to [0, 1]
    return 0.5 * function2(x/2 + 1/2, a)

def integrate(deg, a):
    sum = 0
    x, w = np.polynomial.legendre.leggauss(deg)
    for i in range(deg):
        print("sum({}) += {} * {} (eval in {})".format(sum, w[i], anotherOne(x[i], a), x[i]))
        sum += w[i] * anotherOne(x[i], a)
    return sum

print("result")
print(integrate(10, 1))
It combines your equation to integrate from a to infinity and the equation to change the bounds of an integral. I hope it solves your problem (it works for exp(-x) at least) :)

If you want an inline computation, the program does the sum of:

integral_a^Infinity f(x) dx ≈ sum_i (w_i / 2) * f(a + t_i/(1 - t_i)) / (1 - t_i)^2, with t_i = (x_i + 1)/2

It's a combination of the Gauss-Legendre rule on [-1, 1]:

integral_{-1}^{1} h(u) du ≈ sum_i w_i * h(x_i)

And the substitution that maps [a, Infinity) to [0, 1):

integral_a^Infinity f(x) dx = integral_0^1 f(a + t/(1-t)) / (1-t)^2 dt

And the change of variables that maps [0, 1] back to [-1, 1]:

integral_0^1 g(t) dt = (1/2) * integral_{-1}^{1} g((u+1)/2) du
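As a quick check against the closed form (with a = 1 the exact value is exp(-1), which I'm adding here as the comparison):

# check against the closed form: integral of exp(-x) from 1 to Infinity is exp(-1)
print(abs(integrate(10, 1) - math.exp(-1)))  # should be small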
In "Numerical Programming: A Practical Guide for Scientists and Engineers Using Python and C/C++" by Titus A. Beu, you can find the methods in the code samples integral.py
and specfunc.py
here: http://phys.ubbcluj.ro/~tbeu/INP/libraries.html You call the function xGaussLag(a, deg)
which calls Laguerre
from the other .py file and returns your adjusted (x,w)
between a
and infinity
. Here's how to set this up (note just above deg=80
it is very slow, I'm just showing you how to apply it by modifying lines above):
x, w = np.array(xGaussLag(a,deg))
gauss = sum(w * integrand(x, flag, F, K, vol, T2, T1))
Gets pretty close convergence on deg=80
(faster) but I just put the eps=1e-13
in xGaussLag
and pushed the deg=150
with these results, nonetheless faster than quad
by 33%:
The QUADPACK solution: 0.149221620346 with error: 1.49870924498e-12
Gauss-Legendre solution: 0.149238273747
Difference between QUADPACK and Gauss-Legendre: 1.66534003601e-05
In Cython this is 6x faster than straight Python, by the way, but still too slow, so I'm going to try the "FastGL" package with the answer from @Alexis for now. I'm just posting this as I think it will be useful for other SO users in the future.
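For comparison, a minimal sketch of the same idea using NumPy's built-in Gauss-Laguerre rule instead of the book's xGaussLag (an assumption on my part, not the book's routine: numpy.polynomial.laguerre.laggauss targets integrals of the form integral_0^Infinity exp(-t) f(t) dt, so the weights must be corrected by exp(t_i) and the nodes shifted by a):

# Sketch: Gauss-Laguerre on [a, Infinity) via NumPy's built-in rule.
# laggauss targets integral_0^Infinity exp(-t) f(t) dt, so multiply by exp(t_i)
# to undo the built-in weight and shift the nodes by a. Very high degrees can
# overflow/underflow in the exp(t) factor, so keep deg moderate.
import numpy as np

def gauss_laguerre_a_to_inf(f, a, deg):
    t, w = np.polynomial.laguerre.laggauss(deg)
    return np.sum(w * np.exp(t) * f(t + a))

# usage example on a known integral: integral_1^Infinity exp(-x) dx = exp(-1)
print(gauss_laguerre_a_to_inf(lambda v: np.exp(-v), 1.0, 40))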