 

How to improve performance of this algorithm?

I have a text file with 100,000 pairs: a word and its frequency.

The test.in file format (a small example follows the list):

  • line 1 - total count of word-frequency pairs
  • lines 2 to ~100,001 - the word-frequency pairs
  • line 100,002 - total count of user input words
  • lines 100,003 to the end - the user input words
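
For instance, a tiny test.in in this format could look like this (hypothetical words and counts):

3
apple 15
apply 7
banana 3
2
app
ban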

I parse this file and put the words in

Dictionary<string,double> dictionary;

And I want to execute some search + order logic in the following code:

for (int i = 0; i < 15000; i++)
{
    string searchWord = /* take data from file (or other sources) */;

    var adviceWords = dictionary
        .Where(p => p.Key.StartsWith(searchWord, StringComparison.Ordinal))
        .OrderByDescending(ks => ks.Value)
        .ThenBy(ks => ks.Key, StringComparer.Ordinal)
        .Take(10)
        .ToList();

    // some output
}

The problem: This code must run in less than 10 seconds.

On my computer (Core i5-2400, 8 GB RAM), even with Parallel.For(), it takes about 91 seconds.

Can you give me some advice on how to improve performance?

UPDATE:

Hooray! We did it! Thank you @CodesInChaos, @usr, @T_D and everyone who was involved in solving the problem.

The final code:

// Sort the pairs once by key (ordinal) so each prefix's matches sit in one contiguous run.
var kvList = dictionary.OrderBy(ks => ks.Key, StringComparer.Ordinal).ToList();

var strComparer = new MyStringComparer(); // compares pairs by Key, ordinal (sketched below)
var intComparer = new MyIntComparer();    // orders by Value descending, then Key (sketched below)
var kvListSize = kvList.Count;
var allUserWords = new List<string>();
for (int i = 0; i < userWordQuantity; i++)
{
    var searchWord = Console.ReadLine();
    allUserWords.Add(searchWord);
}
var result = allUserWords
    .AsParallel()
    .AsOrdered()
    .Select(searchWord =>
    {
        // Binary search finds the first key that is >= searchWord.
        int startIndex = kvList.BinarySearch(new KeyValuePair<string, int>(searchWord, 0), strComparer);
        if (startIndex < 0)
            startIndex = ~startIndex;

        var matches = new List<KeyValuePair<string, int>>();

        // The matching keys are contiguous, so stop at the first non-match.
        for (int j = startIndex; j < kvListSize; j++)
        {
            if (kvList[j].Key.StartsWith(searchWord, StringComparison.Ordinal))
                matches.Add(kvList[j]);
            else
                break;
        }
        matches.Sort(intComparer);

        return matches.Select(s => s.Key).Take(10).ToList();
    });
foreach (var adviceWords in result)
{
    foreach (var adviceWord in adviceWords)
    {
        Console.WriteLine(adviceWord);
    }
    Console.WriteLine();
}
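
MyStringComparer and MyIntComparer are not shown in the update; judging from how they are used (ordinal binary search by key, then frequency-descending ordering), they presumably look roughly like this hypothetical reconstruction:

using System.Collections.Generic;

// Hypothetical reconstruction: compares pairs by Key only (ordinal),
// for the initial sort order and the BinarySearch above.
class MyStringComparer : IComparer<KeyValuePair<string, int>>
{
    public int Compare(KeyValuePair<string, int> x, KeyValuePair<string, int> y)
    {
        return string.CompareOrdinal(x.Key, y.Key);
    }
}

// Hypothetical reconstruction: orders by Value (frequency) descending,
// then by Key ascending (ordinal), matching the original LINQ ordering.
class MyIntComparer : IComparer<KeyValuePair<string, int>>
{
    public int Compare(KeyValuePair<string, int> x, KeyValuePair<string, int> y)
    {
        if (x.Value != y.Value)
            return y.Value.CompareTo(x.Value);
        return string.CompareOrdinal(x.Key, y.Key);
    }
}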

Result: 6 sec (9 sec with LINQ instead of the manual inner loop).

asked Oct 01 '14 by chromigo

4 Answers

You are not using any algorithmic strength of the dictionary at all. Ideally, you'd use a tree structure so that you can perform prefix lookups. On the other hand, you are within 3.7x of your performance goal, and I think you can reach it by just optimizing the constant factor in your algorithm.

  1. Don't use LINQ in perf-critical code. Manually loop over all collections and collect results into a List<T>. That turns out to give a major speed-up in practice.
  2. Don't use a dictionary at all. Just use a KeyValuePair<T1, T2>[] and run through it using a foreach loop. This is the fastest possible way to traverse a set of pairs.

Could look like this:

KeyValuePair<string, int>[] items; // all word-frequency pairs
var matches = new List<KeyValuePair<string, int>>(); // consider pre-sizing this

// This could be a parallel loop as well.
// Make sure not to synchronize too much on matches.
// If there tend to be few matches, a lock will be fine.
foreach (var item in items)
{
    if (item.Key.StartsWith(searchWord, StringComparison.Ordinal))
    {
        matches.Add(item);
    }
}

// Sort in place: frequency descending, then key ascending (ordinal).
matches.Sort((a, b) => a.Value != b.Value
    ? b.Value.CompareTo(a.Value)
    : string.CompareOrdinal(a.Key, b.Key));

return matches.Take(10); // maybe matches.RemoveRange(10, matches.Count - 10) is better

That should exceed a 3.7x speedup.

If you need more, try stuffing the items into a dictionary keyed on the first char of Key. That way you can look up all items matching tempInputWord[0]. This should reduce search times by the selectivity of the first char of tempInputWord, which for English text is on the order of 26 or 52. It is a primitive form of prefix lookup with one level of lookup. Not pretty, but maybe it is enough.
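
Not code from the answer, but that one-level bucket idea might look roughly like this, reusing items, searchWord and matches from the sketch above:

// Group the pairs by the first character of the key, once, up front.
var buckets = items
    .GroupBy(p => p.Key[0])
    .ToDictionary(g => g.Key, g => g.ToArray());

// Per query: scan only the bucket for the search word's first character.
KeyValuePair<string, int>[] bucket;
if (buckets.TryGetValue(searchWord[0], out bucket))
{
    foreach (var item in bucket)
    {
        if (item.Key.StartsWith(searchWord, StringComparison.Ordinal))
            matches.Add(item);
    }
}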

answered by usr

I think the best way would be to use a Trie data structure instead of a dictionary. A trie stores all the words in a tree structure, where a node can represent all the words that start with the same letters. So if you look up your search word tempInputWord in a trie, you get the node that represents all the words starting with tempInputWord, and you just have to traverse its child nodes. That means you need only one search operation. The linked Wikipedia article also mentions some other advantages over hash tables (which is what a Dictionary basically is):

  • Looking up data in a trie is faster in the worst case, O(m) time (where m is the length of a search string), compared to an imperfect hash table. An imperfect hash table can have key collisions. A key collision is the hash function mapping of different keys to the same position in a hash table. The worst-case lookup speed in an imperfect hash table is O(N) time, but far more typically is O(1), with O(m) time spent evaluating the hash.
  • There are no collisions of different keys in a trie.
  • Buckets in a trie, which are analogous to hash table buckets that store key collisions, are necessary only if a single key is associated with more than one value.
  • There is no need to provide a hash function or to change hash functions as more keys are added to a trie.
  • A trie can provide an alphabetical ordering of the entries by key.

And here are some ideas for creating a trie in C#.
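
A rough, hypothetical sketch of the structure, matching the Add/Retrieve shape used in the test code below (real implementations add more):

using System.Collections.Generic;
using System.Linq;

// Minimal trie: each node maps one character to a child node;
// values are stored at the node where their key ends.
class Trie<T>
{
    private class Node
    {
        public readonly Dictionary<char, Node> Children = new Dictionary<char, Node>();
        public readonly List<T> Values = new List<T>();
    }

    private readonly Node root = new Node();

    public void Add(string key, T value)
    {
        var node = root;
        foreach (char c in key)
        {
            Node child;
            if (!node.Children.TryGetValue(c, out child))
                node.Children[c] = child = new Node();
            node = child;
        }
        node.Values.Add(value);
    }

    // Walk down to the prefix node, then collect every value in its subtree.
    public IEnumerable<T> Retrieve(string prefix)
    {
        var node = root;
        foreach (char c in prefix)
            if (!node.Children.TryGetValue(c, out node))
                return Enumerable.Empty<T>();
        return Collect(node);
    }

    private static IEnumerable<T> Collect(Node node)
    {
        foreach (var value in node.Values)
            yield return value;
        foreach (var child in node.Children.Values)
            foreach (var value in Collect(child))
                yield return value;
    }
}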

This should at least speed up the lookups; building the Trie, however, might be slower.

Update: OK, I tested it myself using a file of English word frequencies in the same format as yours. This is my code, which uses the Trie class that you also tried to use.

static void Main(string[] args)
{
    Stopwatch sw = new Stopwatch();

    sw.Start();
    var trie = new Trie<KeyValuePair<string, int>>();

    // Build the trie from the 100000 word-frequency pairs.
    var lines = File.ReadLines("en.txt");
    foreach (var line in lines.Take(100000))
    {
        var split = line.Split(' ');
        trie.Add(split[0], new KeyValuePair<string, int>(split[0], int.Parse(split[1])));
    }

    Console.WriteLine("Time needed to read file and build Trie with 100000 words: " + sw.Elapsed);

    sw.Reset();

    // Test with 10000 search words.
    sw.Start();
    foreach (string line in lines.Take(10000))
    {
        var searchWord = line.Split(' ')[0];
        var allPairs = trie.Retrieve(searchWord);
        var bestWords = allPairs.OrderByDescending(kv => kv.Value).ThenBy(kv => kv.Key).Select(kv => kv.Key).Take(10);

        var output = bestWords.Aggregate("", (s1, s2) => s1 + ", " + s2);
        Console.WriteLine(output);
    }

    Console.WriteLine("Time to process 10000 different searchWords: " + sw.Elapsed);
}

My results on a pretty similar machine:

Time needed to read file and build Trie with 100000 words: 00:00:00.7397839
Time to process 10000 different searchWords: 00:00:03.0181700

So I think you are doing something wrong that we cannot see, for example in the way you measure the time or the way you read the file. As my results show, this should be really fast. The 3 seconds are mainly due to the Console output inside the loop, which I needed so that the bestWords variable is actually used; otherwise it would have been optimized away.

answered by T_D

  • Replace the dictionary by a List<KeyValuePair<string, decimal>>, sorted by the key.

    For the search I use the fact that, with ordinal comparison, a string sorts directly before the strings it is a prefix of. So I can use a binary search to find the first candidate. Since the candidates are contiguous, I can replace Where with TakeWhile.

    // comparer compares the pairs by Key only, using ordinal comparison
    // (like MyStringComparer in the update above).
    int startIndex = dictionary.BinarySearch(
        new KeyValuePair<string, decimal>(searchWord, 0), comparer);
    if (startIndex < 0)
        startIndex = ~startIndex;
    
    var adviceWords = dictionary
                .Skip(startIndex)
                .TakeWhile(p => p.Key.StartsWith(searchWord, StringComparison.Ordinal))
                .OrderByDescending(ks => ks.Value)
                .ThenBy(ks => ks.Key, StringComparer.Ordinal)
                .Select(s => s.Key)
                .Take(10).ToList();
    
  • Make sure to use ordinal comparison for all operations, including the initial sort, the binary search and the StartsWith check.

  • I would call Console.ReadLine outside the parallel loop. Probably using AsParallel().Select(...) on the collection of search words instead of Parallel.For.
answered by CodesInChaos


If you want to profile, separate out the reading of the file and see how long that takes on its own. Data calculation, collection, and presentation could likewise be treated as separate steps.
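
A minimal sketch of that kind of staged timing; ReadPairs, ComputeAdvice and Print are hypothetical placeholders for the program's own stages:

var sw = System.Diagnostics.Stopwatch.StartNew();

var pairs = ReadPairs("test.in");              // stage 1: reading the file
Console.WriteLine("read:    " + sw.Elapsed);

sw.Restart();
var advice = ComputeAdvice(pairs, userWords);  // stage 2: calculation
Console.WriteLine("compute: " + sw.Elapsed);

sw.Restart();
Print(advice);                                 // stage 3: presentation
Console.WriteLine("output:  " + sw.Elapsed);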

If you want concurrency AND a dictionary, look at the ConcurrentDictionary, maybe even more for reliability than for performance, but probably for both:

http://msdn.microsoft.com/en-us/library/dd287191(v=vs.110).aspx
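
For example, a minimal hypothetical sketch of counting word frequencies safely from parallel threads (words stands in for any collection of input words):

using System.Collections.Concurrent;
using System.Threading.Tasks;

var frequencies = new ConcurrentDictionary<string, int>();

// AddOrUpdate is atomic per key: insert 1, or increment the existing count.
Parallel.ForEach(words, word =>
    frequencies.AddOrUpdate(word, 1, (key, count) => count + 1));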

answered by Pieter21