
How to print the LDA topic models from gensim? (Python)

Using gensim I was able to extract topics from a set of documents with LSA, but how do I access the topics generated from the LDA models?

Calling lda.print_topics(10) raised the following error, because print_topics() returns None:

Traceback (most recent call last):
  File "/home/alvas/workspace/XLINGTOP/xlingtop.py", line 93, in <module>
    for top in lda.print_topics(2):
TypeError: 'NoneType' object is not iterable

The code:

from gensim import corpora, models, similarities
from gensim.models import hdpmodel, ldamodel
from itertools import izip

documents = ["Human machine interface for lab abc computer applications",
             "A survey of user opinion of computer system response time",
             "The EPS user interface management system",
             "System and human system engineering testing of EPS",
             "Relation of user perceived response time to error measurement",
             "The generation of random binary unordered trees",
             "The intersection graph of paths in trees",
             "Graph minors IV Widths of trees and well quasi ordering",
             "Graph minors A survey"]

# remove common words and tokenize
stoplist = set('for a of the and to in'.split())
texts = [[word for word in document.lower().split() if word not in stoplist]
         for document in documents]

# remove words that appear only once
all_tokens = sum(texts, [])
tokens_once = set(word for word in set(all_tokens) if all_tokens.count(word) == 1)
texts = [[word for word in text if word not in tokens_once]
         for text in texts]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

# build a tf-idf-weighted corpus to train the LSI model on
tfidf = models.TfidfModel(corpus)
corpus_tfidf = tfidf[corpus]

# I can print out the topics for LSA
lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=2)
corpus_lsi = lsi[corpus]

for l, t in izip(corpus_lsi, corpus):
    print l, "#", t
print
for top in lsi.print_topics(2):
    print top

# I can print out the documents and which is the most probable topic for each doc.
lda = ldamodel.LdaModel(corpus, id2word=dictionary, num_topics=50)
corpus_lda = lda[corpus]

for l, t in izip(corpus_lda, corpus):
    print l, "#", t
print

# But I am unable to print out the topics, how should I do it?
for top in lda.print_topics(10):
    print top
asked Feb 22 '13 by alvas


People also ask

What is Gensim LDA?

Latent Dirichlet Allocation (LDA) is a popular algorithm for topic modeling, with excellent implementations in Python's Gensim package. The challenge, however, is how to extract good-quality topics that are clear, segregated, and meaningful.

What are the two main inputs to an LDA topic model using Gensim?

The two main inputs to the LDA topic model are the dictionary (id2word) and the corpus. Gensim creates a unique id for each word in the document.

What is the output of topic Modelling?

Model outputs: for PLSA and LDA, the outputs of the model include two matrices. One is the topic probability distributions over documents, represented by an N × K matrix; the other is the word probability distributions over topics, represented by a K × V matrix.


2 Answers

After some messing around, it seems print_topics(numoftopics) for the ldamodel has a bug. So my workaround is to use print_topic(topicid):

>>> print lda.print_topics()
None
>>> for i in range(lda.num_topics):
...     print lda.print_topic(i)
0.083*response + 0.083*interface + 0.083*time + 0.083*human + 0.083*user + 0.083*survey + 0.083*computer + 0.083*eps + 0.083*trees + 0.083*system
...
answered Oct 15 '22 by alvas


I think the syntax of show_topics has changed over time:

show_topics(num_topics=10, num_words=10, log=False, formatted=True) 

For num_topics number of topics, return the num_words most significant words (10 words per topic, by default).

The topics are returned as a list – a list of strings if formatted is True, or a list of (probability, word) 2-tuples if False.

If log is True, also output this result to log.

Unlike LSA, there is no natural ordering between the topics in LDA. The returned num_topics <= self.num_topics subset of all topics is therefore arbitrary and may change between two LDA training runs.

answered Oct 15 '22 by user2597000