I am writing an algorithm where, given a model, I compute likelihoods for a list of datasets and then need to normalize each likelihood so that together they form a probability distribution. So something like [0.00043, 0.00004, 0.00321] would be converted to something like [0.2, 0.03, 0.77]. My problem is that the log-likelihoods I am working with are extremely small (for instance, in log space, values like -269647.432, -231444.981, etc.). In my C++ code, when I try to add two of them (by taking their exponent) I get an answer of "Inf". I tried to add them in log space (summation/subtraction of logs), but again stumbled upon the same problem.
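Here is a simplified sketch of what my C++ code is doing (the values and array are just placeholders, not my real data):

#include <cmath>
#include <cstdio>

int main() {
    // Placeholder values; the real log-likelihoods come from my model.
    double logL[3] = {-269647.432, -231444.981, -250000.0};

    // Naive normalization: exponentiate each value and divide by the sum.
    double sum = 0.0;
    for (double ll : logL) sum += std::exp(ll);   // exp() underflows to 0 for values this negative

    for (double ll : logL)
        std::printf("%g\n", std::exp(ll) / sum);  // 0/0 is not a finite probability
    return 0;
}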
Can anybody share his/her expert opinion on this?
Thanks
Assuming the likelihoods have been calculated correctly, you could divide each of them by the largest likelihood. That can be done in logarithm form by subtracting the largest log-likelihood from each log-likelihood.
You can then convert back out of log space. The largest value becomes 1.0, because its shifted log is 0, and the smaller ones each fall between 0 and 1.0, expressed as a fraction of the largest. Dividing them by their sum then gives the normalized probabilities you want.
This is standard procedure (sometimes called the max trick, or part of the log-sum-exp trick). Numerically stable MATLAB code:
LL = [ . . . ];   % vector of log-likelihoods
M = max(LL);      % largest log-likelihood
LL = LL - M;      % shift so the maximum is 0
L = exp(LL);      % safe to exponentiate now; the largest value is 1
L = L ./ sum(L);  % normalize so the values sum to 1
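Since you mentioned you are working in C++, the same max-shift trick looks roughly like this (the function name and use of std::vector are just illustrative choices):

#include <algorithm>
#include <cmath>
#include <vector>

// Normalize log-likelihoods into probabilities by subtracting the
// maximum before exponentiating, then dividing by the sum.
std::vector<double> normalizeLogLikelihoods(const std::vector<double>& logL) {
    const double maxLL = *std::max_element(logL.begin(), logL.end());

    std::vector<double> p;
    p.reserve(logL.size());
    double sum = 0.0;
    for (double ll : logL) {
        double v = std::exp(ll - maxLL);  // largest term becomes exp(0) = 1
        p.push_back(v);
        sum += v;
    }
    for (double& x : p) x /= sum;         // entries now sum to 1
    return p;
}

Note that entries whose log-likelihood is many hundreds of units below the maximum will still underflow to exactly 0 after the shift, which is usually the answer you want, since they are astronomically less probable than the best one.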