From Wikipedia:
The complexity of the algorithm is O(n (log n)(log log n)) bit operations.
How do you arrive at that?
The fact that the complexity includes a log log n term suggests to me that there is a sqrt(n) somewhere.
Suppose I am running the sieve on the first 100 numbers (n = 100). Assuming that marking a number as composite takes constant time (array implementation), the number of times we use mark_composite() would be something like

n/2 + n/3 + n/5 + n/7 + ... + n/97 = O(n^2)

And to find the next prime number (for example, to jump to 7 after crossing out all the multiples of 5), the number of operations would be O(n).

So the complexity would be O(n^3). Do you agree?
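For concreteness, the procedure described in the question can be sketched as follows. The function and counter names are my own; the question's mark_composite() step is modeled as the inner array assignment, and counters are added only to make the two costs (marking vs. finding the next prime) visible:

```python
def sieve(n):
    """Sieve of Eratosthenes up to n, structured as the question describes."""
    is_composite = [False] * (n + 1)
    marks = 0   # calls to the constant-time marking step
    scans = 0   # steps spent advancing to the next candidate/prime
    p = 2
    while p <= n:
        if not is_composite[p]:
            # mark every multiple of the prime p as composite
            for m in range(2 * p, n + 1, p):
                is_composite[m] = True
                marks += 1
        p += 1       # move ahead to the next candidate
        scans += 1
    primes = [i for i in range(2, n + 1) if not is_composite[i]]
    return primes, marks, scans

primes, marks, scans = sieve(100)
# there are 25 primes below 100; scans totals n - 1 steps over the whole run
```

Note that the scan counter is bounded by n for the entire run, not per prime, which is the amortization point the answer below addresses.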
The classical Sieve of Eratosthenes takes O(N log log N) time to find all prime numbers less than N. In this article, a modified sieve is discussed that runs in O(N) time.
Since any number has at most one prime factor exceeding sqrt(n), the difference isn't large and has no influence on the asymptotic complexity; but there are many numbers with only two prime factors (or three, with one larger than sqrt(n)), so it does make a noticeable difference in running time.
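The O(N) sieve referred to above is commonly implemented by recording each number's smallest prime factor, so that every composite is crossed out exactly once. A minimal sketch of that idea (function and variable names are my own):

```python
def linear_sieve(n):
    """O(n) sieve: each composite is marked exactly once,
    by its smallest prime factor."""
    spf = [0] * (n + 1)          # spf[x] = smallest prime factor of x
    primes = []
    for i in range(2, n + 1):
        if spf[i] == 0:          # i was never marked, so it is prime
            spf[i] = i
            primes.append(i)
        for p in primes:
            # only mark i*p for primes p <= spf[i]; this guarantees
            # each composite is visited through its smallest prime factor
            if p > spf[i] or i * p > n:
                break
            spf[i * p] = p
    return primes
```

The break condition is the crux: composite i*p is always marked via its smallest prime factor p, so no number is crossed out twice, giving O(n) total work at the cost of O(n) extra memory for the spf array.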
Time Complexity: O(sqrt n) counts only the loop that runs from 2 to sqrt(n); it does not include the marking work, so it is not the total cost of the sieve.
The Sieve of Eratosthenes is a simple, ancient algorithm for finding all prime numbers up to a given limit. It is one of the most efficient ways to find small primes.
Your n/2 + n/3 + n/5 + … + n/97 is not O(n), because the number of terms is not constant. [Edit after your edit: O(n^2) is too loose an upper bound.] A loose upper bound is n(1 + 1/2 + 1/3 + 1/4 + 1/5 + … + 1/n) (n times the sum of the reciprocals of all numbers up to n), which is O(n log n): see Harmonic number. A tighter upper bound is n(1/2 + 1/3 + 1/5 + 1/7 + …), i.e. n times the sum of the reciprocals of the primes up to n, which is O(n log log n). (See here or here.)
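That the sum of prime reciprocals grows like log log n is Mertens' second theorem: the sum approaches log log n + M, where M ≈ 0.2615 is the Meissel-Mertens constant. This can be checked numerically; a small sketch (the helper name is my own):

```python
import math

def prime_reciprocal_sum(n):
    """Sum of 1/p over primes p <= n, computed with a simple sieve."""
    is_composite = [False] * (n + 1)
    total = 0.0
    for p in range(2, n + 1):
        if not is_composite[p]:
            total += 1.0 / p
            for m in range(p * p, n + 1, p):
                is_composite[m] = True
    return total

n = 10**6
s = prime_reciprocal_sum(n)
# Mertens' second theorem: s ≈ log log n + 0.2615
approx = math.log(math.log(n)) + 0.2615
```

At n = 10^6 the two values agree to a few decimal places, which is what makes the O(n log log n) bound tight in practice.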
The "find the next prime number" part is only O(n) overall, amortized: you move ahead to the next number only n times in total, not per step, so this whole part of the algorithm takes only O(n).
So using these two you get an upper bound of O(n log log n) + O(n) = O(n log log n) arithmetic operations. If you count bit operations, since you're dealing with numbers up to n, they have about log n bits, which is where the factor of log n comes in, giving O(n log n log log n) bit operations.
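As a sanity check on the arithmetic-operation count, one can count the actual marking steps and compare them against n log log n; if the bound is right, the ratio should stay near a constant as n grows. A sketch (names are my own):

```python
import math

def mark_count(n):
    """Count marking operations of the plain sieve, starting each prime's
    pass at 2*p to match the n/2 + n/3 + n/5 + ... analysis above."""
    is_composite = [False] * (n + 1)
    marks = 0
    for p in range(2, n + 1):
        if not is_composite[p]:
            for m in range(2 * p, n + 1, p):
                is_composite[m] = True
                marks += 1
    return marks

# ratio of actual marks to n * log(log(n)) settles near a constant,
# consistent with the O(n log log n) bound
ratios = [mark_count(n) / (n * math.log(math.log(n)))
          for n in (10**4, 10**5, 10**6)]
```

The ratios stay close together across two orders of magnitude of n, whereas under an O(n^2) count they would grow by a factor of about 100 per step.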