I have multiple documents that each contain multiple sentences. I want to use doc2vec to generate sentence vectors and then cluster them (e.g. with k-means) using sklearn.
The idea is that similar sentences end up grouped together in several clusters. However, it is not clear to me whether I have to train on every single document separately and then apply a clustering algorithm to the sentence vectors, or whether I could infer a sentence vector from doc2vec without training on every new sentence.
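From what I have read, gensim's Doc2Vec exposes an infer_vector() method, so I assume the "infer without training" path would look roughly like the sketch below (new_sentence is just a placeholder, and model is the trained Doc2Vec model from my snippet further down):

# assumed usage: infer a vector for an unseen sentence without retraining
new_sentence = "this is an unseen example sentence"     # hypothetical input
new_vector = model.infer_vector(new_sentence.split())   # uses the already trained model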
Right now this is a snippet of my code:
import pandas as pd
from gensim.models import Doc2Vec
from gensim.models.doc2vec import TaggedDocument
from sklearn.cluster import KMeans

# tag each sentence so Doc2Vec learns one vector per sentence
sentenceLabeled = []
for sentenceID, sentence in enumerate(example_sentences):
    sentenceL = TaggedDocument(words=sentence.split(), tags=['SENT_%s' % sentenceID])
    sentenceLabeled.append(sentenceL)

model = Doc2Vec(size=300, window=10, min_count=0, workers=11, alpha=0.025,
                min_alpha=0.025)
model.build_vocab(sentenceLabeled)
for epoch in range(20):
    model.train(sentenceLabeled)
    model.alpha -= 0.002            # decrease the learning rate
    model.min_alpha = model.alpha   # fix the learning rate, no decay

# one row per sentence: the learned doctag (sentence) vectors
textVect = model.docvecs.doctag_syn0

## K-means ##
num_clusters = 3
km = KMeans(n_clusters=num_clusters)
km.fit(textVect)
clusters = km.labels_.tolist()

## Print Sentence Clusters ##
cluster_info = {'sentence': example_sentences, 'cluster': clusters}
sentenceDF = pd.DataFrame(cluster_info, index=[clusters], columns=['sentence', 'cluster'])

for num in range(num_clusters):
    print()
    print("Sentence cluster %d: " % int(num + 1), end='')
    print()
    for sentence in sentenceDF.ix[num]['sentence'].values.tolist():
        print(' %s ' % sentence, end='')
        print()
    print()
Basically, what I am doing right now is training on every labeled sentence in the document. However, I have the feeling that this could be done in a simpler way.
Eventually, sentences that contain similar words should end up clustered together and printed. At this point, training every document separately does not reveal any clear logic within the clusters.
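To make the question concrete, if inference works the way I think it does, I would ideally like to end up with something like the following, where new_sentences is just placeholder data and km is the fitted KMeans model from above:

# hypothetical follow-up: assign unseen sentences to the existing clusters
new_sentences = ["another unseen sentence", "one more example"]        # placeholder data
new_vectors = [model.infer_vector(s.split()) for s in new_sentences]   # infer, no retraining
new_clusters = km.predict(new_vectors)                                 # cluster id per sentence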
Hopefully someone can steer me in the right direction. Thanks.