Getting topic-word distribution from LDA in scikit-learn

I was wondering if there is a method in the LDA implementation of scikit-learn that returns the topic-word distribution, like gensim's show_topics() method. I checked the documentation but didn't find anything.

Niro asked May 26 '17

People also ask

Can you use TF-IDF with LDA?

Not to disagree with Jérôme's answer, tf-idf is used in latent Dirichlet allocation to some extent. As described in the paper Topic Models by Blei and Lafferty (e.g., p. 6, Visualizing Topics, and p. 12), the tf-idf score can be very useful for LDA. It can be used to visualize topics or to choose the vocabulary.
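A rough sketch of that vocabulary-selection idea (the toy corpus and the cutoff of 1000 terms are placeholders, not recommendations):

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer

docs = ['foo bar baz', 'bar bar foo', 'baz qux foo']  # toy corpus (placeholder)

# Score each term by its highest tf-idf value across documents.
tfidf = TfidfVectorizer()
tfidf_matrix = tfidf.fit_transform(docs)
scores = np.asarray(tfidf_matrix.max(axis=0).todense()).ravel()
terms = np.array(tfidf.get_feature_names_out())

# Keep the highest-scoring terms as the LDA vocabulary (cutoff is arbitrary here).
top_vocab = terms[np.argsort(scores)[::-1][:1000]]

# LDA works on raw counts, so re-vectorize with the restricted vocabulary.
counts = CountVectorizer(vocabulary=top_vocab).fit_transform(docs)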

How do I know how many topics to use in LDA?

To decide on a suitable number of topics, you can compare the goodness-of-fit of LDA models fit with varying numbers of topics. You can evaluate the goodness-of-fit of an LDA model by calculating the perplexity of a held-out set of documents. The perplexity indicates how well the model describes a set of documents.
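As a sketch of that comparison, scikit-learn's LatentDirichletAllocation exposes a perplexity() method you can loop over (the toy corpus and candidate topic counts below are placeholders):

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split

docs = ['foo bar', 'bar baz foo', 'baz qux', 'qux foo bar']  # toy corpus (placeholder)
X = CountVectorizer().fit_transform(docs)
X_train, X_test = train_test_split(X, random_state=0)

# Fit one model per candidate topic count; lower held-out perplexity is better.
for k in (2, 3, 4):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X_train)
    print(k, lda.perplexity(X_test))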

What is max_iter in LDA?

n_components is the number of topics to find in the corpus; max_iter is the maximum number of iterations allowed for the LDA algorithm to converge.
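For instance (the values here are arbitrary, not recommendations):

from sklearn.decomposition import LatentDirichletAllocation

# 10 topics, at most 50 iterations over the corpus; both values are placeholders.
lda = LatentDirichletAllocation(n_components=10, max_iter=50, random_state=0)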


1 Answer

Take a look at sklearn.decomposition.LatentDirichletAllocation.components_:

components_ : array, [n_topics, n_features]

Topic word distribution. components_[i, j] represents word j in topic i.

Here's a minimal example:

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

data = ['blah blah foo bar', 'foo foo foo foo bar', 'bar bar bar bar foo',
        'foo bar bar bar baz foo', 'foo foo foo bar baz', 'blah banana', 
        'cookies candy', 'more text please', 'hey there are more words here',
        'bananas', 'i am a real boy', 'boy', 'girl']

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(data)

# get_feature_names() was replaced by get_feature_names_out() in scikit-learn >= 1.0
vocab = vectorizer.get_feature_names_out()

n_top_words = 5
k = 2

# n_topics was renamed to n_components in newer versions of scikit-learn
model = LatentDirichletAllocation(n_components=k, random_state=100)

# fit the model; the returned array holds each document's topic distribution
id_topic = model.fit_transform(X)

topic_words = {}

for topic, comp in enumerate(model.components_):
    # np.argsort returns the indices that would sort comp in ASCENDING order,
    # e.g. arr = [3, 7, 1, 0, 3, 6] -> np.argsort(arr) -> [3, 2, 0, 4, 5, 1]
    # reversing with [::-1] puts the indices in descending order of weight,
    # and slicing keeps the n_top_words highest-weighted words for this topic
    word_idx = np.argsort(comp)[::-1][:n_top_words]

    # store the words most relevant to the topic
    topic_words[topic] = [vocab[i] for i in word_idx]

Check out the results:

for topic, words in topic_words.items():
    print('Topic: %d' % topic)
    print('  %s' % ', '.join(words))

Topic: 0
  more, blah, here, hey, words
Topic: 1
  foo, bar, blah, baz, boy

You should obviously try this code with a much larger body of text, but this is one way to get the most informative words for a given number of topics.
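Two follow-up notes on the objects above, continuing from the example: components_ holds unnormalized pseudo-counts, so per the scikit-learn docs you can normalize each row to get an actual probability distribution, and the array returned by fit_transform (id_topic above) is the per-document topic distribution:

# normalize each topic's row so it sums to 1, giving P(word | topic)
topic_word_dist = model.components_ / model.components_.sum(axis=1)[:, np.newaxis]

# id_topic from fit_transform holds each document's topic mixture
for doc, dist in zip(data, id_topic):
    print('%s -> %s' % (doc, np.round(dist, 3)))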

blacksite answered Sep 20 '22