
Calculate maximum likelihood using PyMC3

There are cases when I'm not actually interested in the full posterior of a Bayesian inference, but only in the maximum likelihood (or the maximum a posteriori for suitably chosen priors), and possibly its Hessian. PyMC3 has functions to do that, but find_MAP seems to return the model parameters in transformed form, depending on the prior distribution placed on them. Is there an easy way to get the untransformed values from these? The output of find_hessian is even less clear to me, but it is most likely in the transformed space too.
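For instance, a minimal sketch of what I mean (the exact key names, such as 'p_interval__', depend on the PyMC3 version):

import numpy as np
import pymc3 as pm

data = np.repeat((0, 1), (3, 6))
with pm.Model():
    p = pm.Uniform('p', 0, 1)          # bounded prior, so PyMC3 transforms p internally
    w = pm.Binomial('w', n=len(data), p=p, observed=data.sum())
    print(pm.find_MAP())               # may report the transformed variable, e.g. 'p_interval__'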

asked Nov 20 '16 by burnpanck


2 Answers

Maybe the simplest solution is to pass the argument transform=None, to avoid PyMC3 applying the transformation, and then use find_MAP.

Here is an example with a simple model:

import numpy as np
import pymc3 as pm

data = np.repeat((0, 1), (3, 6))  # 3 zeros, 6 ones
with pm.Model() as normal_aproximation:
    # with transform=None, PyMC3 works directly on the bounded variable p
    p = pm.Uniform('p', 0, 1, transform=None)
    w = pm.Binomial('w', n=len(data), p=p, observed=data.sum())
    mean_q = pm.find_MAP()                               # MAP estimate of p
    std_q = ((1 / pm.find_hessian(mean_q)) ** 0.5)[0]    # Laplace approximation of the standard deviation
print(mean_q['p'], std_q)

Have you considered using ADVI?
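If you do want an approximation of the full posterior rather than a point estimate, a rough sketch of how that could look (using pm.fit, which is available in more recent PyMC3 releases; the exact API may differ in your version):

import numpy as np
import pymc3 as pm

data = np.repeat((0, 1), (3, 6))
with pm.Model():
    p = pm.Uniform('p', 0, 1)
    w = pm.Binomial('w', n=len(data), p=p, observed=data.sum())
    approx = pm.fit(n=10000, method='advi')   # mean-field variational approximation
trace = approx.sample(1000)                    # draws are on the original scale of p
print(trace['p'].mean(), trace['p'].std())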

answered Nov 19 '22 by aloctavodia


I came across this once more and found a way to get the untransformed values from the transformed ones, just in case somebody else needs this as well. The gist of it is that the untransformed values are essentially Theano expressions that can be evaluated given the transformed values. PyMC3 helps here a little by providing the Model.fn() method, which creates such an evaluation function accepting values by name. You only need to supply the untransformed variables of interest via the outs argument. A complete example:

import numpy as np
import pymc3 as pm

data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_aproximation:
    p = pm.Uniform('p', 0, 1)
    w = pm.Binomial('w', n=len(data), p=p, observed=data.sum())
    map_estimate = pm.find_MAP()

# create a function that evaluates p, given the transformed values
evalfun = normal_aproximation.fn(outs=p)
# create name -> value mappings for the free variables (i.e. the transformed values)
inp = {v.name: map_estimate[v.name] for v in normal_aproximation.free_RVs}
# now use that input mapping to evaluate p
p_estimate = evalfun(inp)

outs can also receive a list of variables; evalfun will then output the values of the corresponding variables in the same order.
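For instance (a small sketch continuing the model above; p ** 2 is just an arbitrary extra expression to show the ordering):

# request several outputs at once; the results come back in the same order as outs
evalfun = normal_aproximation.fn(outs=[p, p ** 2])
p_estimate, p_squared_estimate = evalfun(inp)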

answered Nov 19 '22 by burnpanck