
Best data structures for searching millions of filenames? [duplicate]

Possible Duplicate:
Build an index for substring search?

I'm developing a filename search tool. I'd like to search a hard drive (or multiple hard drives) containing, perhaps, millions of filenames.

Given the file: application 3 - jack smithinson

Searches:

  1. 'application', '3', 'jack', 'smithinson'
  2. 'smith'
  3. 'inson'

Should all return this file.

What are the best data structures for this kind of operation and why?

  1. Binary tree.
  2. Trie.
  3. SQLite database of filenames
  4. More?
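For searches like 'inson' to match 'smithinson', the structure has to support arbitrary substring queries, which rules out a plain word index. One common approach is a trigram inverted index: every 3-character window of each filename points back to the files that contain it. The sketch below is illustrative only (class and function names are my own, not from any library), and it falls back to a linear scan for queries shorter than three characters:

```python
# Minimal trigram inverted index for substring search over filenames.
# Illustrative sketch only; names and structure are assumptions, not a library API.
from collections import defaultdict

def trigrams(s):
    """All 3-character windows of s, lowercased."""
    s = s.lower()
    return {s[i:i + 3] for i in range(len(s) - 2)}

class FilenameIndex:
    def __init__(self):
        self.names = []                # id -> filename
        self.index = defaultdict(set)  # trigram -> set of file ids

    def add(self, name):
        fid = len(self.names)
        self.names.append(name)
        for g in trigrams(name):
            self.index[g].add(fid)

    def search(self, query):
        q = query.lower()
        grams = trigrams(q)
        if not grams:
            # Query shorter than 3 chars: no trigrams, fall back to a scan.
            return [n for n in self.names if q in n.lower()]
        # A match must contain every trigram of the query...
        cands = set.intersection(*(self.index[g] for g in grams))
        # ...but trigram overlap is necessary, not sufficient, so verify.
        return [self.names[i] for i in sorted(cands) if q in self.names[i].lower()]

idx = FilenameIndex()
idx.add("application 3 - jack smithinson")
print(idx.search("smith"))   # matches mid-word
print(idx.search("inson"))   # matches a suffix
print(idx.search("3"))       # short query: linear-scan fallback
```

The verify step matters: candidate sets from trigram intersection can contain false positives (a name holding all the query's trigrams in scattered positions), so each candidate is re-checked with a real substring test.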
Jason asked Feb 23 '23

1 Answer

Store the file names in Lucene indexes. You can find more information at http://incubator.apache.org/lucene.net/. Lucene lets you create highly optimized search indexes; Yahoo used it for years in its web search engine. It offers an abstract way to create indexes without worrying about the internal implementation, and it's about as easy as creating an XML document in memory and then serializing it to disk.
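If pulling in a full search engine feels heavy, option 3 from the question (a SQLite database) can also serve substring queries, provided the SQLite build is 3.34 or newer and ships the FTS5 trigram tokenizer. A hedged sketch, using only Python's standard library:

```python
# Substring search with SQLite FTS5's trigram tokenizer.
# Assumes SQLite >= 3.34 compiled with FTS5; an in-memory DB for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE files USING fts5(name, tokenize='trigram')")
con.executemany(
    "INSERT INTO files VALUES (?)",
    [("application 3 - jack smithinson",), ("report - alice jones",)],
)

# With the trigram tokenizer, MATCH finds substrings of length >= 3
# anywhere in the name, not just at word boundaries.
for (name,) in con.execute("SELECT name FROM files WHERE name MATCH ?", ("inson",)):
    print(name)
```

The trigram tokenizer makes both MATCH and LIKE queries index-backed, so the database is not scanning every row per search; queries shorter than three characters would still need a LIKE fallback.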

Sap answered May 06 '23