
normalizing a list of very small double numbers (likelihoods)

I am writing an algorithm where, given a model, I compute likelihoods for a list of datasets and then need to normalize each likelihood to a probability. So something like [0.00043, 0.00004, 0.00321] might be converted to something like [0.2, 0.03, 0.77]. My problem is that the log-likelihoods I am working with are very large in magnitude (in log space the values are like -269647.432, -231444.981, etc.). In my C++ code, when I try to add two of them (by taking their exponent) I get an answer of "Inf". I tried to add them in log space (summation/subtraction of logs), but again stumbled upon the same problem.

Can anybody share his/her expert opinion on this?

Thanks

Ikram Ullah asked Feb 15 '23 16:02


2 Answers

Assuming the likelihoods have been calculated correctly, you could divide each of them by the largest likelihood. That can be done in logarithm form by subtracting the largest log-likelihood from each log-likelihood.

You can then convert out of logarithm space. The largest will be 1.0, because its normalized log is 0. The smaller ones will each be between 0 and 1.0, each expressed as a fraction of the largest. If you then need the values to sum to 1, divide each one by their sum.

Patricia Shanahan answered Mar 07 '23 03:03


This is standard procedure. Numerically stable Matlab code:

LL = [ . . . ];  % vector of log-likelihoods
M = max(LL);
LL = LL - M;
L = exp(LL);
L = L ./ sum(L);
Timothy Shields answered Mar 07 '23 02:03