 

How does Pyspark Calculate Doc2Vec from word2vec word embeddings?

I have a pyspark dataframe with a corpus of ~300k unique rows, each with a "doc" column containing a few sentences of text.

After processing, I have a 200-dimensional vector representation of each row/doc. My NLP process (a sketch of the pipeline is shown after the list):

  1. Remove Punctuation with regex udf
  2. Word stemming with an nltk Snowball stemmer udf
  3. Pyspark Tokenizer
  4. Word2Vec (ml.feature.Word2Vec, vectorSize=200, windowSize=5)
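
A minimal sketch of that pipeline, assuming a dataframe `df` with a string column `doc` (the column names and stemmer language here are my own illustrative choices, not taken from the question):

```python
from pyspark.sql import functions as F
from pyspark.sql.types import StringType
from pyspark.ml.feature import Tokenizer, Word2Vec
from nltk.stem.snowball import SnowballStemmer

# 1. Remove punctuation (shown here with regexp_replace rather than a udf)
df = df.withColumn("clean_doc", F.regexp_replace(F.lower(F.col("doc")), r"[^\w\s]", ""))

# 2. Stem each word with NLTK's Snowball stemmer wrapped in a udf
stemmer = SnowballStemmer("english")
stem_udf = F.udf(lambda text: " ".join(stemmer.stem(w) for w in text.split()), StringType())
df = df.withColumn("stemmed_doc", stem_udf("clean_doc"))

# 3. Tokenize into an array of words
df = Tokenizer(inputCol="stemmed_doc", outputCol="words").transform(df)

# 4. Word2Vec: fit on the corpus, then transform to one 200-d vector per row/doc
w2v = Word2Vec(vectorSize=200, windowSize=5, inputCol="words", outputCol="doc_vec")
df = w2v.fit(df).transform(df)
```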

I understand how this implementation uses the skipgram model to create embeddings for each word based on the full corpus used. My question is: How does this implementation go from a vector for each word in the corpus to a vector for each document/row?

Is it the same process as in the gensim doc2vec implementation, where it simply concatenates the word vectors in each doc together (see: How does gensim calculate doc2vec paragraph vectors)? If so, how does it cut the vector down to the specified size of 200? Does it use just the first 200 words, or an average?

I was unable to find the information in the source code: https://spark.apache.org/docs/2.2.0/api/python/_modules/pyspark/ml/feature.html#Word2Vec

Any help or reference material to look at is super appreciated!

asked Jan 02 '18 by whs2k


2 Answers

One simple way to go from word-vectors, to a single vector for a range-of-text, is to average the vectors together. And, that often works well-enough for some tasks.
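
On the "how does it cut the vector down to 200" worry from the question: averaging never needs to truncate anything, because every word vector already has the target dimensionality. A toy illustration (hypothetical 2-d word vectors, not taken from either library):

```python
import numpy as np

# Pretend these came from a trained Word2Vec model; all share one dimensionality.
word_vectors = {
    "cat": np.array([0.1, 0.3]),
    "sat": np.array([0.5, -0.2]),
    "mat": np.array([0.0, 0.4]),
}
doc = ["cat", "sat", "mat"]

# The doc vector is the element-wise mean, so it has the same length as each
# word vector -- no truncation to "the first N words" is involved.
doc_vector = np.mean([word_vectors[w] for w in doc], axis=0)
print(doc_vector)
```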

However, that's not how the Doc2Vec class in gensim does it. That class implements the 'Paragraph Vectors' technique, where separate document-vectors are trained in a manner analogous to word-vectors.

The doc-vectors participate in training a bit like a floating synthetic word, involved in every sliding window/target-word-prediction. They're not composed-up or concatenated-from preexisting word-vectors, though in some modes they may be simultaneously trained alongside word-vectors. (However, the fast and often top-performing PV-DBOW mode, enabled in gensim with the parameter dm=0, doesn't train or use input-word-vectors at all. It just trains doc-vectors that are good for predicting the words in each text-example.)
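
For comparison, a minimal gensim sketch of that Paragraph Vectors approach in PV-DBOW mode (the toy corpus and parameters are illustrative; attribute names follow the gensim 4.x API):

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

corpus = [
    TaggedDocument(words=["the", "cat", "sat"], tags=["doc_0"]),
    TaggedDocument(words=["dogs", "bark", "loudly"], tags=["doc_1"]),
]

# dm=0 selects PV-DBOW: doc-vectors are trained directly to predict the words
# in each document; no pre-existing word-vectors are composed or averaged.
model = Doc2Vec(corpus, dm=0, vector_size=200, window=5, min_count=1, epochs=20)

print(model.dv["doc_0"])                        # trained vector for a seen doc
print(model.infer_vector(["a", "new", "doc"]))  # vector inferred for unseen text
```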

Since you've mentioned multiple libraries (both Spark MLlib and gensim) but haven't shown your code, it's not certain exactly what your existing process is doing.

answered by gojomo


In PySpark, ml.feature.Word2Vec produces the so-called doc2vec vector by averaging the word2vec vectors of the words in each doc, weighted by term frequency (TF). You can study the result of the official example at https://spark.apache.org/docs/2.2.0/api/python/_modules/pyspark/ml/feature.html#Word2Vec
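
A small sanity check of that behaviour (an illustrative sketch, assuming an existing SparkSession named `spark`): the transformed document vector should closely match a hand-computed average of the per-word vectors returned by getVectors().

```python
from pyspark.ml.feature import Word2Vec

df = spark.createDataFrame([(["hi", "there", "friend"],)], ["words"])
model = Word2Vec(vectorSize=5, minCount=0, inputCol="words", outputCol="vec").fit(df)

# Document vector produced by the model's transform
doc_vec = model.transform(df).first()["vec"]

# Average of the individual word vectors, computed by hand
word_vecs = {r["word"]: r["vector"] for r in model.getVectors().collect()}
manual_avg = sum(word_vecs[w].toArray() for w in ["hi", "there", "friend"]) / 3.0

print(doc_vec.toArray())   # should closely match manual_avg
print(manual_avg)
```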

answered by Slyer