The ldamodel in gensim has two methods: get_document_topics and get_term_topics.
Despite their use in this gensim tutorial notebook, I do not fully understand how to interpret the output of get_term_topics, so I created the self-contained code below to show what I mean:
from gensim import corpora, models

texts = [['human', 'interface', 'computer'],
         ['survey', 'user', 'computer', 'system', 'response', 'time'],
         ['eps', 'user', 'interface', 'system'],
         ['system', 'human', 'system', 'eps'],
         ['user', 'response', 'time'],
         ['trees'],
         ['graph', 'trees'],
         ['graph', 'minors', 'trees'],
         ['graph', 'minors', 'survey']]

# build the dictionary and corpus, then train the model
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
model = models.ldamodel.LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
                                 random_state=0, chunksize=2, passes=10)

# show the topics
topics = model.show_topics()
for topic in topics:
    print(topic)
### (0, '0.159*"system" + 0.137*"user" + 0.102*"response" + 0.102*"time" + 0.099*"eps" + 0.090*"human" + 0.090*"interface" + 0.080*"computer" + 0.052*"survey" + 0.030*"minors"')
### (1, '0.267*"graph" + 0.216*"minors" + 0.167*"survey" + 0.163*"trees" + 0.024*"time" + 0.024*"response" + 0.024*"eps" + 0.023*"user" + 0.023*"system" + 0.023*"computer"')
# get_document_topics for a document with a single token 'user'
text = ["user"]
bow = dictionary.doc2bow(text)
print("get_document_topics", model.get_document_topics(bow))
### get_document_topics [(0, 0.74568415806946331), (1, 0.25431584193053675)]

# get_term_topics for the token 'user'
print("get_term_topics:", model.get_term_topics("user", minimum_probability=0.000001))
### get_term_topics: [(0, 0.1124525558321441), (1, 0.006876306738765027)]
For get_document_topics, the output makes sense. The two probabilities add up to 1.0, and the topic where user has a higher probability (from model.show_topics()) also gets the higher probability assigned.
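As a quick sanity check (a minimal snippet reusing bow from above):

doc_topics = model.get_document_topics(bow)
print(sum(prob for _, prob in doc_topics))  # ~1.0, a proper distribution over the two topics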
But for get_term_topics, there are questions:

1. The probabilities don't add up to 1.0. Why?
2. While the topic where user has a higher probability (from model.show_topics()) also has a higher number assigned, what does this number mean?
3. Why should we use get_term_topics at all, when get_document_topics can provide (seemingly) the same functionality and has meaningful output?

I was working on LDA topic modeling and came across this post. I created two topics, say topic1 and topic2.
The top 10 words for each topic are as follows:
0.009*"would" + 0.008*"experi" + 0.008*"need" + 0.007*"like" + 0.007*"code" + 0.007*"work" + 0.006*"think" + 0.006*"make" + 0.006*"one" + 0.006*"get
0.027*"ierr" + 0.018*"line" + 0.014*"0.0e+00" + 0.010*"error" + 0.009*"defin" + 0.009*"norm" + 0.006*"call" + 0.005*"type" + 0.005*"de" + 0.005*"warn
Eventually, I took one document to determine the closest topic:

for d in doc:
    bow = dictionary.doc2bow(d.split())
    t = lda.get_document_topics(bow)

and the output is [(0, 0.88935698141006414), (1, 0.1106430185899358)].
To answer your first question: yes, the probabilities do add up to 1.0 for a document, and that is what get_document_topics does. The documentation clearly states that it returns the topic distribution for the given document bow, as a list of (topic_id, topic_probability) 2-tuples.
Further, I tried get_term_topics for the keyword "ierr":

t = lda.get_term_topics("ierr", minimum_probability=0.000001)

and the result is [(1, 0.027292299843400435)], which is nothing but the word's contribution towards determining each topic, and that makes sense.
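To make that number concrete: gensim computes these values from the model's topic-word matrix (an exponentiated expected-log version of it, in recent versions), so each value is roughly the word's weight inside that topic's word distribution, not a probability over topics, which is why the values need not sum to 1. A minimal sketch of the idea, reusing the model and dictionary from the question (model.get_topics() returns the normalized topics-by-vocabulary matrix, so these numbers are close to, but not exactly, what get_term_topics reports):

word_id = dictionary.token2id["user"]   # map the token to its dictionary id
topic_word = model.get_topics()         # shape (num_topics, vocab_size); each row sums to 1
for topic_id in range(model.num_topics):
    # weight of 'user' inside this topic's word distribution
    print(topic_id, topic_word[topic_id, word_id])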
So, you can label a document based on the topic distribution you get from get_document_topics, and you can judge a word's importance based on the contribution reported by get_term_topics.
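Putting both together, a minimal sketch of that workflow (hypothetical helper names; assumes the lda model and dictionary from the snippets above, and that the words exist in the dictionary):

def label_document(lda, dictionary, tokens):
    # label a document with its single most probable topic
    bow = dictionary.doc2bow(tokens)
    return max(lda.get_document_topics(bow), key=lambda pair: pair[1])

def rank_words(lda, words):
    # rank words by their strongest per-topic contribution
    scored = []
    for w in words:
        contributions = lda.get_term_topics(w, minimum_probability=0.000001)
        if contributions:
            topic_id, value = max(contributions, key=lambda pair: pair[1])
            scored.append((w, topic_id, value))
    return sorted(scored, key=lambda triple: triple[2], reverse=True)

For example, label_document(lda, dictionary, "ierr line error".split()) would return the (topic_id, probability) pair of the winning topic, and rank_words(lda, ["ierr", "line", "error"]) would order those keywords by how strongly they pull towards their best topic.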
I hope this helps.
Be careful: both functions have a minimum_probability optional argument. If you don't set it, the returned probabilities won't necessarily sum to 1, because topics whose probability falls below the threshold are discarded.
Here are the signatures of the functions:
get_term_topics(word_id, minimum_probability=None)
get_document_topics(bow, minimum_probability=None, minimum_phi_value=None, per_word_topics=False)
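A minimal demonstration of the effect (reusing bow from the question's snippet; the exact numbers depend on the trained model):

print(model.get_document_topics(bow))                            # default threshold may drop low-probability topics
print(model.get_document_topics(bow, minimum_probability=1e-8))  # a tiny threshold keeps them, so the values sum to ~1
print(model.get_term_topics("user"))                             # the same thresholding applies to get_term_topics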
As for the use of each function, as stated by the documentation:

get_term_topics: returns the most relevant topics to the given word.
get_document_topics: returns the topic distribution for the given document.
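Note also the per_word_topics flag in the get_document_topics signature above: with per_word_topics=True, gensim additionally returns per-word topic information. A minimal sketch, reusing bow from the question's snippet:

# returns three things: the document's topic distribution,
# the most relevant topics per word, and the per-word phi values
doc_topics, word_topics, phi_values = model.get_document_topics(bow, per_word_topics=True)
print(doc_topics)   # [(topic_id, probability), ...] as before
print(word_topics)  # [(word_id, [topic_ids ordered by relevance]), ...]
print(phi_values)   # [(word_id, [(topic_id, phi)]), ...]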