Indexing token bigrams in Lucene

I need to index bigrams of words (tokens) in Lucene. I can produce the n-grams myself and then index them, but I am wondering if there is something in Lucene that will do this for me. As far as I can tell, Lucene only indexes n-grams of characters. Any ideas?

asked Mar 17 '09 by Ilija


2 Answers

Use the NGramTokenizer:

http://lucene.apache.org/java/2_3_2/api/contrib-analyzers/org/apache/lucene/analysis/ngram/NGramTokenizer.html

answered by bajafresh4life


The class that you are looking for is the ShingleFilter: http://lucene.apache.org/java/2_4_0/api/org/apache/lucene/analysis/shingle/ShingleFilter.html
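For illustration, here is a minimal sketch of how ShingleFilter can be wired into a custom Analyzer to emit word bigrams. It assumes a recent Lucene release rather than the 2.4 API linked above, and the class name BigramAnalyzer and the field name "body" are just placeholders:

    import java.io.IOException;

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.shingle.ShingleFilter;
    import org.apache.lucene.analysis.standard.StandardTokenizer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class BigramAnalyzer extends Analyzer {
        @Override
        protected TokenStreamComponents createComponents(String fieldName) {
            StandardTokenizer source = new StandardTokenizer();
            // ShingleFilter joins adjacent tokens into "shingles"; min = max = 2 gives word bigrams
            ShingleFilter bigrams = new ShingleFilter(source, 2, 2);
            bigrams.setOutputUnigrams(false); // drop the single-word tokens, keep only the bigrams
            return new TokenStreamComponents(source, bigrams);
        }

        // Small demo: print the bigrams produced for a sample sentence
        public static void main(String[] args) throws IOException {
            try (Analyzer analyzer = new BigramAnalyzer();
                 TokenStream stream = analyzer.tokenStream("body", "the quick brown fox")) {
                CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
                stream.reset();
                while (stream.incrementToken()) {
                    System.out.println(term.toString()); // "the quick", "quick brown", "brown fox"
                }
                stream.end();
            }
        }
    }

Pass an instance of this analyzer to IndexWriterConfig (or to a single field via PerFieldAnalyzerWrapper) so the bigrams are what actually get indexed; setOutputUnigrams(true) keeps the single words alongside the bigrams if you need both.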

answered by DerHeiligste