
Why are logarithms in computer science presumed to be base-2 logarithms? [closed]

I saw the following in a computer science textbook:

[image: textbook excerpt, not reproduced here]

So, I was wondering if someone could explain to me why.

asked Mar 03 '17 by Andy


People also ask

Why is log base 2 used in computer science?

A binary logarithm is a logarithm whose base is 2. Binary logarithms underpin the binary numeral system, which counts using only zeros and ones, and they are very common in computer science.

Is log base 2 in computer science?

The binary logarithm is the logarithm with base 2 and is commonly used in computer science.

What is the default base of log in computer science?

The natural logarithm has the number e (that is b ≈ 2.718) as its base; its use is widespread in mathematics and physics, because of its simpler integral and derivative. The binary logarithm uses base 2 (that is b = 2) and is frequently used in computer science.

Why is log base 2 used in time complexity?

In Computer Science, it's often base 2. This is because many divide and conquer algorithms that exhibit this kind of complexity are dividing the problem in two at each step. Binary search is a classic example.


1 Answer

One of the most common ways that logarithms arise in computer science is by repeatedly dividing some array in half, which often occurs with divide-and-conquer algorithms like binary search, mergesort, etc. In those cases, the number of times you can divide an array of length n in half before you get down to single-element arrays is log₂ n.
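Here's a quick sketch in Python that just counts the halvings directly and compares the result against math.log2 (the helper name halvings is only for illustration):

    import math

    def halvings(n):
        """Count how many times a size-n array can be halved before reaching size 1."""
        count = 0
        while n > 1:
            n //= 2
            count += 1
        return count

    print(halvings(1024), math.log2(1024))            # 10  10.0
    print(halvings(1_000_000), math.log2(1_000_000))  # 19  ~19.93 (we effectively take the floor)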

Another very common way logarithms arise is by looking at the bits of a number. Writing out a number in binary uses roughly log₂ n bits for the number n. Algorithms like radix sort that sometimes work one bit at a time often give rise to logs like these. Other algorithms, like the binary GCD algorithm, work by dividing out powers of two and therefore end up with log factors floating around.
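For example, Python's built-in int.bit_length() makes the "roughly log₂ n bits" claim easy to check:

    import math

    for n in [1, 5, 255, 256, 1_000_000]:
        # The number of binary digits of n is floor(log2 n) + 1.
        print(n, bin(n), n.bit_length(), math.floor(math.log2(n)) + 1)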

Logarithms in physics, math, and other sciences often arise because you're working with continuous processes that grow as a function of time. The natural logarithm comes up in those contexts because the "natural" growth rate of some process over time is modeled by e^x (for some definition of "natural" growth rate). But in computer science, exponential growth usually occurs as a consequence of discrete processes like the divide-and-conquer algorithms described above or the manipulation of binary values. Consequently, we typically reach for log₂ n as our logarithmic function, since it arises so frequently.

This isn't to say that we always use base-two logarithms in CS. For example, the analysis of AVL trees often involves logarithms whose base is the golden ratio φ due to the presence of Fibonacci numbers. Many randomized algorithms do involve e in some way, such as the standard analysis of quicksort, which involves harmonic numbers and thus connects back to natural logarithms. Those are examples of processes where the growth rate is modeled by something else - Fibonacci numbers or the exponential function - and so we opt for different log bases there. It's just that it's sufficiently common to work with binary numbers or to divide things in half that base-two logarithms end up being the default.
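As a rough illustration of that harmonic-number connection (just a numeric check, not the full quicksort analysis), the n-th harmonic number H_n grows like ln n plus a constant (about 0.577):

    import math

    def harmonic(n):
        """H_n = 1 + 1/2 + ... + 1/n."""
        return sum(1.0 / i for i in range(1, n + 1))

    for n in [10, 1_000, 100_000]:
        print(n, harmonic(n), math.log(n))   # the difference approaches ~0.5772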

In many cases, it doesn't even matter what base you choose. For example, in big-O notation, all logarithms are asymptotically equivalent (they only differ by a multiplicative constant factor), so we usually don't even specify the base when writing something like O(n log n) or O(log n).
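You can see that constant factor concretely: for instance, log₂ n and log₁₀ n always differ by the same ratio, log₂ 10 ≈ 3.32, no matter what n is.

    import math

    # Change of base: log_b(n) = log_2(n) / log_2(b), so the ratio is independent of n.
    for n in [16, 1_000, 1_000_000]:
        print(n, math.log2(n) / math.log10(n))   # always ~3.3219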

answered Nov 09 '22 by templatetypedef