 

Latent Dirichlet Allocation, pitfalls, tips and programs

I'm experimenting with Latent Dirichlet Allocation for topic disambiguation and assignment, and I'm looking for advice.

  1. Which program is the "best", where best is some combination of easiest to use, best prior estimation, and speed?
  2. How do I incorporate my intuitions about topicality? Let's say I think I know that some items in the corpus are really in the same category, like all articles by the same author. Can I add that into the analysis?
  3. Any unexpected pitfalls or tips I should know before embarking?

I'd prefer it if there were R or Python front ends for whatever program, but I expect (and accept) that I'll be dealing with C.

Gregg Lind asked Oct 10 '08


4 Answers

  1. You mentioned a preference for R; you can use two packages: topicmodels (slow) or lda (fast). Python has deltaLDA, pyLDA, Gensim, etc.

  2. Topic modeling with specified topics or words is tricky out-of-the-box; David Andrzejewski has some Python code that seems to do it. There is a C++ implementation of supervised LDA here, and plenty of papers on related approaches (DiscLDA, Labeled LDA), but not in an easy-to-use form, for me anyway...

  3. As @adi92 says, removing stopwords, whitespace, numbers, and punctuation, plus stemming, all improve things a lot. One possible pitfall is having the wrong (or an inappropriate) number of topics. Currently there are no straightforward diagnostics for how many topics are optimal for a corpus of a given size, etc. There are some measures of topic quality available in MALLET (fastest), which are very handy.
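The preprocessing step in point 3 can be sketched in a few lines of plain Python. This is a minimal illustration, not a production pipeline: the stopword list here is a tiny placeholder (a real list, e.g. NLTK's, has ~100+ entries), and stemming is omitted.

```python
import re

# Illustrative stopword list -- a real one would be much longer.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in",
             "on", "is", "are", "it", "this"}

def preprocess(text):
    """Lowercase, keep only alphabetic tokens, drop stopwords and short words."""
    tokens = re.findall(r"[a-z]+", text.lower())   # strips numbers/punctuation
    return [t for t in tokens if t not in STOPWORDS and len(t) > 2]

doc = "The 2 models, LDA and pLSI, are trained on this corpus."
print(preprocess(doc))  # → ['models', 'lda', 'plsi', 'trained', 'corpus']
```

Each document then becomes a clean token list, ready to be turned into a bag-of-words corpus for whatever LDA implementation you pick.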

Ben answered Sep 18 '22


  1. http://mallet.cs.umass.edu/ is IMHO the most awesome plug-n-play LDA package out there. It uses Gibbs sampling to estimate topics and has a really straightforward command-line interface with a lot of extra bells-n-whistles (a few more complicated models, hyper-parameter optimization, etc.).

  2. It's best to let the algorithm do its job. There may be variants of LDA (and pLSI, etc.) which let you do some sort of semi-supervised thing; I don't know of any at the moment.

  3. I found that removing stop-words and other really high-frequency words seemed to improve the quality of my topics a lot (evaluated by looking at the top words of each topic, not any rigorous metric). I am guessing stemming/lemmatization would help as well.
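To make the Gibbs-sampling idea behind MALLET concrete, here is a toy collapsed Gibbs sampler for LDA in pure Python. It is a sketch for building intuition only: no hyper-parameter optimization, fixed symmetric alpha/beta priors, and nowhere near MALLET's speed.

```python
import random

def lda_gibbs(docs, num_topics, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Toy collapsed Gibbs sampler. docs: list of token lists.
    Returns the top 3 words per topic after sampling."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    widx = {w: i for i, w in enumerate(vocab)}

    ndk = [[0] * num_topics for _ in docs]      # doc-topic counts
    nkw = [[0] * V for _ in range(num_topics)]  # topic-word counts
    nk = [0] * num_topics                       # tokens per topic
    z = []                                      # topic assignment per token
    for d, doc in enumerate(docs):              # random initialization
        zs = []
        for w in doc:
            k = rng.randrange(num_topics)
            zs.append(k)
            ndk[d][k] += 1; nkw[k][widx[w]] += 1; nk[k] += 1
        z.append(zs)

    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k, wi = z[d][i], widx[w]
                # remove this token's current assignment from the counts
                ndk[d][k] -= 1; nkw[k][wi] -= 1; nk[k] -= 1
                # full conditional p(z = t | everything else)
                weights = [(ndk[d][t] + alpha) * (nkw[t][wi] + beta)
                           / (nk[t] + V * beta) for t in range(num_topics)]
                r = rng.random() * sum(weights)
                for t, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        k = t
                        break
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][wi] += 1; nk[k] += 1

    return [[vocab[i] for i in sorted(range(V), key=lambda i: -nkw[t][i])[:3]]
            for t in range(num_topics)]
```

On a tiny corpus with two obvious themes, e.g. `lda_gibbs([["apple", "banana", "fruit"], ["cpu", "gpu", "chip"]], 2)`, the sampler tends to pull the fruit words and the hardware words into separate topics, which is also how point 3 suggests eyeballing topic quality: look at the top words per topic.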

Aditya Mukherji answered Sep 18 '22


In addition to the usual sources, it seems like the most active area talking about this is the topic-models listserv. From my initial survey, the easiest package to understand is the LDA Matlab package.

This is not lightweight stuff at all, so I'm not surprised it's hard to find good resources on it.

Gregg Lind answered Sep 19 '22


For this kind of analysis I have used LingPipe: http://alias-i.com/lingpipe/index.html. It is an open source Java library, parts of which I use directly or port. To incorporate your own data, you may use a classifier, such as Naive Bayes, in conjunction. My experience with statistical NLP is limited, but it usually follows a cycle of setting up classifiers, training, looking over results, and tweaking.
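The Naive Bayes classifier mentioned above can be sketched compactly. This is not LingPipe's implementation (that is Java); it's a minimal multinomial Naive Bayes with add-one smoothing in plain Python, just to show the train/predict cycle the answer describes.

```python
import math
from collections import Counter

class NaiveBayes:
    """Minimal multinomial Naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.prior = {c: labels.count(c) / len(labels) for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}  # per-class word counts
        for doc, lab in zip(docs, labels):
            self.counts[lab].update(doc)
        self.vocab = {w for c in self.classes for w in self.counts[c]}
        return self

    def predict(self, doc):
        def log_score(c):
            total = sum(self.counts[c].values())
            s = math.log(self.prior[c])
            for w in doc:  # smoothed log-likelihood of each token
                s += math.log((self.counts[c][w] + 1) / (total + len(self.vocab)))
            return s
        return max(self.classes, key=log_score)
```

Usage follows the cycle described: train on labeled token lists, inspect predictions, then tweak features or data. For example, `NaiveBayes().fit([["cheap", "pills", "buy"], ["meeting", "notes", "agenda"]], ["spam", "ham"]).predict(["cheap", "buy"])` returns `"spam"`.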

eulerfx answered Sep 18 '22