Has anybody compared the stemmers Lucene ships in the package org.tartarus.snowball.ext: EnglishStemmer, PorterStemmer, and LovinsStemmer? What are the strong and weak points of the algorithms behind them? When should each of them be used? Or are there other algorithms available for stemming English words?
Thanks.
The Lovins stemmer is a very old algorithm that is of little practical use today, since the Porter stemmer is much stronger. Based on some quick skimming of the source code, PorterStemmer seems to implement Porter's original (1980) algorithm, while EnglishStemmer implements his updated version (often called "Porter2"), which should be better.
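If you want to see the difference for yourself, the Snowball-generated classes all share the same small API (setCurrent, stem, getCurrent), so a quick side-by-side comparison is easy to sketch. This is a minimal example, assuming the Lucene/Snowball jar is on the classpath:

```java
import org.tartarus.snowball.SnowballStemmer;
import org.tartarus.snowball.ext.EnglishStemmer;
import org.tartarus.snowball.ext.PorterStemmer;

public class StemmerCompare {
    public static void main(String[] args) {
        SnowballStemmer porter = new PorterStemmer();
        SnowballStemmer english = new EnglishStemmer();
        // A few words where the two algorithms can disagree.
        for (String word : new String[] {"running", "generously", "cries", "conspiracy"}) {
            porter.setCurrent(word);
            porter.stem();
            english.setCurrent(word);
            english.stem();
            System.out.println(word + " -> Porter: " + porter.getCurrent()
                    + ", English (Porter2): " + english.getCurrent());
        }
    }
}
```

Note that these stemmers expect lower-cased input; in a real Lucene analysis chain a LowerCaseFilter would run first.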
A stronger stemming algorithm (actually a lemmatizer) is available in the Stanford NLP tools. A Lucene-to-Stanford NLP bridge by yours truly is available here (API docs).
See also Manning, Raghavan & Schütze, Introduction to Information Retrieval, for general background on stemming and lemmatization.
I've tested the three Lucene stemmers available in org.apache.lucene.analysis.en as of version 4.4.0 (EnglishMinimalStemFilter, KStemFilter and PorterStemFilter) in a document classification problem I'm working on. My results corroborate the claims made by the authors of Introduction to Information Retrieval: in document classification settings, stemming is harmful for small training corpora and makes no difference for large ones.
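For reference, swapping one of these filters into an analysis chain is straightforward, so re-running an experiment with a different stemmer is cheap. Here is a minimal sketch against the Lucene 4.4 API (the field name "body" and the sample text are arbitrary); to try KStemFilter or EnglishMinimalStemFilter instead, just replace the PorterStemFilter line:

```java
import java.io.Reader;
import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.en.PorterStemFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class StemFilterDemo {
    public static void main(String[] args) throws Exception {
        Analyzer analyzer = new Analyzer() {
            @Override
            protected TokenStreamComponents createComponents(String field, Reader reader) {
                Tokenizer source = new StandardTokenizer(Version.LUCENE_44, reader);
                // Stem filters expect lower-cased tokens, so lower-case first.
                TokenStream filter = new PorterStemFilter(
                        new LowerCaseFilter(Version.LUCENE_44, source));
                return new TokenStreamComponents(source, filter);
            }
        };
        try (TokenStream ts = analyzer.tokenStream("body",
                new StringReader("The ponies were running quickly"))) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                System.out.println(term.toString());
            }
            ts.end();
        }
    }
}
```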
For search and indexing, stemming can be more useful (see, e.g., Jenkins & Smith), but even there the answer to your question depends on the details of what you're doing. There is no free lunch!
At the end of the day, nothing beats empirical tests of real code on real data. The only way you'll really know which is better is by running the stemmers for yourself in your application.