Looking at the .NET System.Collections.Generic.Dictionary<TKey,TValue>
implementation, I found the method HashHelpers.IsPrime(n), which checks whether a number is prime.
I was a little confused about why it uses only a very simple optimization technique: testing just the odd divisors starting from 3.
(from source code)
int limit = (int)Math.Sqrt(candidate);
for (int divisor = 3; divisor <= limit; divisor += 2)
{
    if ((candidate % divisor) == 0)
        return false;
}
return true;
So they halve the number of checks, testing only odd divisors from 3 up to limit. But it is more optimal to test only divisors of the form 6k-1 and 6k+1, which, according to Wikipedia, cuts the number of tests by a factor of three. And I think there are even more optimal and faster primality tests.
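For illustration, a minimal sketch of that 6k±1 variant might look like this (my own hypothetical code, not taken from the .NET source):

// Hypothetical 6k±1 trial division; not the actual HashHelpers code.
static bool IsPrime6k(int candidate)
{
    // Handle small inputs and multiples of 2 and 3 up front.
    if (candidate <= 3)
        return candidate >= 2;
    if (candidate % 2 == 0 || candidate % 3 == 0)
        return false;
    int limit = (int)Math.Sqrt(candidate);
    // Every prime > 3 has the form 6k - 1 or 6k + 1, so only those
    // values need to be tried as divisors: 5, 7, 11, 13, 17, 19, ...
    for (int divisor = 5; divisor <= limit; divisor += 6)
    {
        if (candidate % divisor == 0 || candidate % (divisor + 2) == 0)
            return false;
    }
    return true;
}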
I understand that in this particular Dictionary<TKey,TValue>
implementation it's not that important, because the method is called only for huge dictionaries, and in pretty rare cases, while resizing. But in general, this is a framework, and a very popular one from a well-known company. Maybe there is some logic here that I'm not seeing? Thanks.
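(Presumably the resize path scans for the next prime table size with something roughly like the following; this is my simplified guess, not the actual HashHelpers code.)

// Hypothetical sketch of picking a prime table size on resize.
static int GetNextPrimeSize(int min)
{
    // Scan odd candidates upward from min until a primality check
    // (such as the fragment quoted above) succeeds.
    for (int i = (min | 1); i < int.MaxValue; i += 2)
    {
        if (IsPrime(i))
            return i;
    }
    return min;
}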
The simplest primality test is trial division: given an input number n, check whether it is evenly divisible by any prime number between 2 and √n (i.e. whether the division leaves no remainder). If so, then n is composite; otherwise, it is prime.
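A self-contained version of that check, wrapping the fragment quoted above, might look like this (a sketch; the actual HashHelpers.IsPrime may differ in details):

// Sketch of a complete trial-division primality check.
static bool IsPrime(int candidate)
{
    // Even candidates: only 2 itself is prime.
    if ((candidate & 1) == 0)
        return candidate == 2;
    if (candidate < 2)
        return false;
    int limit = (int)Math.Sqrt(candidate);
    // Trial division by odd divisors only, as in the fragment above.
    for (int divisor = 3; divisor <= limit; divisor += 2)
    {
        if ((candidate % divisor) == 0)
            return false;
    }
    return true;
}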
I think there are even more optimal and faster primality tests.
Yep.
Maybe there is some logic here that I'm not seeing?
Nope. You've accurately summed up the situation.
You ask:
Why is HashHelpers.IsPrime in .NET implemented in a non-optimal way?
and then you answer your own question:
I understand that in this particular Dictionary<TKey,TValue> implementation it's not that important, because the method is called only for huge dictionaries, and in pretty rare cases, while resizing.
So you know the answer to your question. It's not optimized because fast enough is by definition fast enough, and the given algorithm is fast enough.
If you want to make it faster, hey, it's open source. Go implement an algorithm you like better and submit the detailed results of your carefully designed, accurate and precise empirical performance tests that clearly demonstrate that making an unnecessary change to a widely used foundational piece of functionality is justified by your superior performance algorithm in an unimportant and rare scenario.
If that sounds like a lot of work for almost no benefit then again, you know the answer to your question. Expensive work that has small benefits gets sorted to the bottom of the priority list.