I want to construct word embeddings for documents using the word2vec tool. I know how to find the vector embedding corresponding to a single word (unigram). Now I want to find a vector for a bigram. Is it possible to do this using word2vec? If yes, how?
The following snippet will get you the vector representation of a bigram. Note that the bigram you want to convert to a vector needs to have an underscore instead of a space between the words, e.g. bigram2vec(unigrams, "this report") is wrong, it should be bigram2vec(unigrams, "this_report"). For more details on generating the unigrams, please see the documentation of the gensim.models.word2vec.Word2Vec class.
from gensim.models import word2vec
from gensim.models.phrases import Phrases

def bigram2vec(unigrams, bigram_to_search):
    # Detect frequent word pairs and join them with "_" (e.g. "this_report"),
    # then train word2vec on the rephrased corpus.
    bigrams = Phrases(unigrams)
    model = word2vec.Word2Vec(bigrams[unigrams])
    # Recent gensim exposes the vocabulary via model.wv; older versions used
    # model.vocab and model[bigram_to_search] instead.
    if bigram_to_search in model.wv:
        return model.wv[bigram_to_search]
    else:
        return None
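For context, unigrams is expected to be a tokenized corpus: a list of sentences, where each sentence is a list of word strings. Below is a minimal sketch of how you might call the function; the toy corpus is purely illustrative, and with so little data the default frequency thresholds of Phrases and Word2Vec may keep "this_report" out of the vocabulary, so in practice you would use a real corpus.

# Hypothetical toy corpus; in practice, tokenize your own documents.
unigrams = [
    ["this", "report", "covers", "quarterly", "results"],
    ["this", "report", "was", "written", "last", "week"],
    ["we", "publish", "this", "report", "every", "month"],
]

vector = bigram2vec(unigrams, "this_report")
if vector is None:
    # With a corpus this small, the default min_count/threshold settings
    # will usually prevent "this_report" from being formed or kept.
    print("bigram not in vocabulary")
else:
    print(vector.shape)  # e.g. (100,) with word2vec's default vector size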