I am trying to calculate log(a + b) given log(a) and log(b). The problem is, log(a) and log(b) are so negative that when I try to calculate a and b themselves, they underflow and I get log(0), which is undefined.
For log(a * b) and log(a / b), this isn't a problem, since log(a * b) = log(a) + log(b) and log(a / b) = log(a) - log(b). Is there a similar method to calculate log(a + b) without needing a and b themselves, avoiding the underflow?
In short, use the following expression:

fmax(la, lb) + log1p(exp(-fabs(lb - la)))

where I am using la and lb as the variables storing log(a) and log(b) respectively, and function names are from C's math.h. Most other languages will also have these functions, but they might be named differently, e.g. abs and max instead of fabs and fmax. (Note: These are conventions I will use throughout this answer.)
Speaking of functions, Mark Dickinson has a good point: you might want to check whether you have access to one that will do this for you directly. For example, if you are using Python and NumPy, you have access to logaddexp, and SciPy has logsumexp.
If you'd like more detail on where the above came from, how to add more than two numbers, and how to subtract, keep reading.
There isn't a rule as simple as the ones for multiplication and division, but there is a mathematical identity that will help:

log(a + b) = log(a) + log(1 + b/a)
We can play with that identity a little to get the following:
log(a + b) = log(a) + log(1 + b/a)
= log(a) + log(1 + exp(log(b) - log(a)))
= la + log(1 + exp(lb - la))
There's still a problem here. If la is much greater than lb, you'll have 1 + 0.000...000something inside the log. There aren't enough digits in the floating-point mantissa to store the something, so you'll just get log(1), losing lb entirely.
Luckily, most programming languages have a function in their standard library for solving just this problem, log1p, which calculates the logarithm of one plus its argument. That is, log1p(x) returns log(1 + x), but in a way that's accurate for very small x.
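To see how much this matters, here's a quick demonstration (a minimal sketch; the exact digits printed depend on your platform's libm):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 1e-20;
    // 1 + x rounds to exactly 1.0 in double precision,
    // so the small term is lost before log() ever runs.
    printf("log(1 + x) = %g\n", log(1.0 + x)); // prints 0
    // log1p() keeps the small term.
    printf("log1p(x)   = %g\n", log1p(x));     // prints 1e-20
    return 0;
}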
So, now we have:
log(a + b) = la + log1p(exp(lb - la))
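For example, with la = -3 and lb = -5, this gives -3 + log1p(exp(-2)) ≈ -2.8731, which matches log(exp(-3) + exp(-5)) computed directly. Notice that only the difference lb - la ever goes through exp, so it works just as well when la and lb are far too negative for exp alone.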
We are almost there. There's one other thing to consider. In general, you want la to be greater than lb. It won't always matter, but sometimes this will gain you extra precision.* If the difference between lb and la is really big, this will save you from overflow in exp(lb - la). In the most extreme case, the calculation works when lb is negative infinity (i.e. b is 0), but not when la is.

Sometimes, you'll know which one is greater, and you can just use that one as la. But when it could be either one, you can use maximum and absolute value to work around it:
log(a + b) = fmax(la, lb) + log1p(exp(-fabs(lb - la)))
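Wrapped up as a small C function, that final expression might look like this (a minimal sketch; log_add is just a name I'm using here, not a standard library function):

#include <math.h>

// Computes log(a + b) given la = log(a) and lb = log(b),
// without ever forming a or b directly.
double log_add(double la, double lb)
{
    // Pulling the larger value out in front keeps the argument
    // to exp() at or below zero, so exp() can never overflow.
    return fmax(la, lb) + log1p(exp(-fabs(lb - la)));
}

For example, log_add(-1000.0, -1000.0) returns -1000 + log(2) ≈ -999.3069, even though exp(-1000) underflows to 0.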
If you need to take the sum of more than two numbers, we can derive an expanded version of the identity above:
log(a[0] + a[1] + a[2] + ...)
= log(a[0] * (a[0]/a[0] + a[1]/a[0] + a[2]/a[0] + ...))
= log(a[0]) + log(a[0]/a[0] + a[1]/a[0] + a[2]/a[0] + ...)
= la[0] + log(1 + exp(la[1]-la[0]) + exp(la[2]-la[0]) + ...)
We will want to use similar tricks as when we were adding two numbers. That way, we get the most accurate answer and avoid over- and underflow as much as possible. First, log1p:
log(a[0] + a[1] + a[2] + ...)
= la[0] + log1p(exp(la[1]-la[0]) + exp(la[2]-la[0]) + ...)
The other consideration is which operand to pull out in front of the log1p. I used la[0] for demonstration so far, but you want to use whichever one is greatest. This is for all the same reasons we wanted la > lb when adding two numbers. For example, if la[1] were greatest, you would do the calculation like this:
log(a[0] + a[1] + a[2] + ...)
= la[1] + log1p(exp(la[0]-la[1]) + exp(la[2]-la[1]) + ...)
Putting this into proper code, it would look something like this (this is C, but it should translate well to other languages):
#include <math.h>

// Returns the index of the maximum element in the array.
static int index_of_max(double la[], int num_elements)
{
    int idx = 0;
    for (int i = 1; i < num_elements; i++) {
        if (la[i] > la[idx]) {
            idx = i;
        }
    }
    return idx;
}

// Computes log(a[0] + a[1] + ...) given la[i] = log(a[i]).
double log_sum(double la[], int num_elements)
{
    int idx = index_of_max(la, num_elements);
    double sum_exp = 0;
    for (int i = 0; i < num_elements; i++) {
        if (i == idx) {
            continue;
        }
        sum_exp += exp(la[i] - la[idx]);
    }
    return la[idx] + log1p(sum_exp);
}
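As a quick sanity check, passing it la = {-1000.0, -1001.0, -1002.0} should return -1000 + log1p(exp(-1) + exp(-2)) ≈ -999.5924, even though exp(la[i]) underflows to 0 for every element on its own.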
This wasn't part of the question, but it may be useful nonetheless: subtraction in log space can be performed similarly. The basic formula is like this:
log(a - b) = la + log(1 - exp(lb - la))
Note that this is still assuming la is greater than lb, but for subtraction, it's even more important. If la is less than lb, you are taking the logarithm of a negative number!
Similar to addition, this has an accuracy issue that can be fixed by using specialized functions, but it turns out there are two ways. One uses the same log1p function as above, but the other uses expm1, where expm1(x) returns exp(x) - 1. Here are both ways:
log(a - b) = la + log1p(-exp(lb - la))
log(a - b) = la + log(-expm1(lb - la))
Which one you should use depends on the value of -(lb - la). The first is more accurate when -(lb - la) is greater than roughly 0.693 (i.e. log(2)), and the second is more accurate when it's less. For more detail on why that is and where log(2) came from, see this note from the R project evaluating the two methods.
The end result would look like this:
(lb - la < -0.693) ? la + log1p(-exp(lb - la)) : la + log(-expm1(lb - la))
or in function form:
#include <math.h>

// Computes log(a - b) given la = log(a) and lb = log(b).
// Assumes la > lb, so that a - b is positive.
double log_diff(double la, double lb)
{
    // -0.693 is approximately -log(2), the crossover point
    // between the accuracy of the two methods.
    if (lb - la < -0.693) {
        return la + log1p(-exp(lb - la));
    } else {
        return la + log(-expm1(lb - la));
    }
}
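As a quick check, log_diff(0.0, -0.5) takes the expm1 branch (since -0.5 is not less than -0.693) and returns log(-expm1(-0.5)) = log(1 - exp(-0.5)) ≈ -0.9328.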
* There is a bit of a sweet spot for this. When the difference between la and lb is small, the answer will be accurate either way. When the difference is too big, the result will always equal the greater of the two, since floating-point numbers don't have enough precision. But when the difference is just right, you'll get better accuracy when la is greater.