Below is my Python code for a class assignment from a couple of weeks ago which I have been unable to debug. The problem is about finding the value at risk (i.e., the p% quantile) of an aggregate loss random variable using FFT. We were given a clear mathematical procedure for estimating the discretized CDF of the aggregate loss random variable, but my results are seriously off: I am making some kind of mistake which I have been unable to find even after hours of debugging my code.
The aggregate loss random variable S is given by S = sum(X_i for i in range(N)), where N is negatively binomially distributed with r=5, beta=.2, and the X_i are exponentially distributed with theta=1. The probability generating function for this parametrization is P(z) = [1 - \beta(z-1)]^{-r}.
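As a quick sanity check of this parametrization (not part of the assignment; the names `pgf` and `r_nb` are mine), the PGF should satisfy P(1) = 1, and its derivative at 1 should equal E[N] = r*beta = 1, which a central finite difference can confirm:

```python
# Sanity check of the PGF of N ~ NB(r=5, beta=0.2), to verify the
# parametrization before plugging it into the FFT pipeline.
r_nb, beta = 5, 0.2

def pgf(z):
    return (1 - beta * (z - 1)) ** (-r_nb)

assert abs(pgf(1.0) - 1.0) < 1e-12  # P(1) = 1 for any PGF

eps = 1e-6
mean_N = (pgf(1.0 + eps) - pgf(1.0 - eps)) / (2 * eps)  # P'(1) = E[N]
print(mean_N)  # should be very close to r * beta = 1.0
```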
We were asked to approximate the distribution of S by:
- choosing a step width h and an integer n such that r = 2^n is the number of elements to discretize X on,
- discretizing X by calculating the probabilities of being in equally spaced intervals of width h,
- taking the FFT of the discretized X,
- applying the PGF of N elementwise to the elements of the Fourier-transformed X,
- taking the inverse FFT of the result.
The resulting vector should be an approximation of the probability masses of each such interval for S. I know from previous methods that the 95% VaR ought to be ~4 and the 99.9% VaR ought to be ~10, but my code returns nonsensical results. Generally speaking, the index where the ECDF reaches levels >0.95 comes far too late, and even after hours of debugging I have not managed to find where I am going wrong.
I have also asked this question on the math stackexchange, since this question is very much on the intersection of programming and math and I have no idea at this moment whether the issue is on the implementation side of things or whether I am applying the mathematical ideas wrong.
import numpy as np
from scipy.stats import expon
from scipy.fft import fft, ifft
r, beta, theta = 5, .2, 1
var_levels = [.95, .999]
def discretize_X(h: float, m: int):
    X = expon(scale=theta)
    f_X = [X.cdf(h / 2),
           *[X.cdf(j * h + h / 2) - X.cdf(j * h - h / 2) for j in range(1, m - 1)],
           X.sf((m - 1) * h - h / 2)]
    return f_X
# Probability generating function of N ~ NB(r, beta)
def PGF(z: complex):
    return (1 - beta * (z - 1)) ** (-r)
h = 1e-2
n = 10
r = 2 ** n
VaRs, TVaRs = [], []
# discretize X with (r-1) cells of width h and one final cell with the survival function at h*(r-1)
f_X = discretize_X(h, r)
phi_vec = fft(f_X)
f_tilde_vec_fft = np.array([PGF(phi) for phi in phi_vec])
f_S = np.real(ifft(f_tilde_vec_fft))
ecdf_S = np.cumsum(f_S) # calc cumsum to get ECDF
for p in var_levels:
    var_idx = np.where(ecdf_S >= p)[0][0]  # lowest index where ecdf_S >= p
    print("p =", p, "\nVaR idx:", var_idx)
    var = h * var_idx  # VaR should be this index times the cell width
    print("VaR:", var)
    # TVaR should be each cell's probability times the value inside that cell, normalized by (1 - p)
    tvar = 1 / (1 - p) * np.sum(f_S[var_idx:] * np.array([i * h for i in range(var_idx, r)]))
    VaRs.append(var)
    TVaRs.append(tvar)
print(VaRs, TVaRs)
I am not sure about the math, but in the snippet the variable r gets overridden by r = 2 ** n, so when computing f_tilde_vec_fft the function PGF uses r = 1024 instead of the expected r = 5. Fix: rename r to r_nb in the definition of the distribution parameters:
r_nb, beta, theta = 5, .2, 1
and also in the function PGF:
    return (1 - beta * (z - 1)) ** (-r_nb)
After rerunning with all other parameters unchanged (h, n, etc.), I get VaRs = [4.05, 9.06].
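For completeness, here is a minimal self-contained sketch of the corrected pipeline with the rename applied (variable names such as r_nb and m are as in the fix above; m takes over the role the shadowed r played). Under the question's parameters it reproduces VaRs close to the [4.05, 9.06] reported above:

```python
import numpy as np
from scipy.stats import expon
from scipy.fft import fft, ifft

r_nb, beta, theta = 5, 0.2, 1  # negative binomial / exponential parameters
h, n = 1e-2, 10
m = 2 ** n                     # number of discretization cells (formerly the shadowing 'r')

# discretize X ~ Exp(theta) into cells of width h, last cell holds the tail mass
X = expon(scale=theta)
f_X = np.array([X.cdf(h / 2),
                *[X.cdf(j * h + h / 2) - X.cdf(j * h - h / 2) for j in range(1, m - 1)],
                X.sf((m - 1) * h - h / 2)])

phi = fft(f_X)                                           # transform the discretized X
f_S = np.real(ifft((1 - beta * (phi - 1)) ** (-r_nb)))   # PGF of N applied elementwise, then inverted
ecdf_S = np.cumsum(f_S)

# first index where the ECDF reaches each level, scaled by the cell width
VaRs = [h * np.argmax(ecdf_S >= p) for p in (.95, .999)]
print(VaRs)  # roughly [4.05, 9.06]
```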