With LDA, the model generates different topics every time I train on the same corpus; by setting np.random.seed(0), the LDA model will always be initialized and trained in exactly the same way.
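(To illustrate the seeding behaviour I'm relying on here: re-seeding NumPy's global RNG replays exactly the same draws, which is why a seeded LDA run is reproducible. A minimal sketch:)

```python
import numpy as np

# Seeding the global RNG makes every subsequent draw reproducible.
np.random.seed(0)
first = np.random.rand(3)

np.random.seed(0)  # re-seed with the same value
second = np.random.rand(3)

print(np.array_equal(first, second))  # True: identical draws
```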
Is it the same for gensim's Word2Vec models? By setting the random seed to a constant, would different runs on the same dataset produce the same model?
But strangely, it's already giving me the same vectors across different instances:
>>> from nltk.corpus import brown
>>> from gensim.models import Word2Vec
>>> sentences = brown.sents()[:100]
>>> model = Word2Vec(sentences, size=10, window=5, min_count=5, workers=4)
>>> word0 = sentences[0][0]
>>> model[word0]
array([ 0.04985042, 0.02882229, -0.03625415, -0.03165979, 0.06049283,
0.01207791, 0.04722737, 0.01984878, -0.03026265, 0.04485954], dtype=float32)
>>> model = Word2Vec(sentences, size=10, window=5, min_count=5, workers=4)
>>> model[word0]
array([ 0.04985042, 0.02882229, -0.03625415, -0.03165979, 0.06049283,
0.01207791, 0.04722737, 0.01984878, -0.03026265, 0.04485954], dtype=float32)
>>> model = Word2Vec(sentences, size=20, window=5, min_count=5, workers=4)
>>> model[word0]
array([ 0.02596745, 0.01475067, -0.01839622, -0.01587902, 0.03079717,
0.00586761, 0.02367715, 0.00930568, -0.01521437, 0.02213679,
0.01043982, -0.00625582, 0.00173071, -0.00235749, 0.01309298,
0.00710233, -0.02270884, -0.01477827, 0.01166443, 0.00283862], dtype=float32)
>>> model = Word2Vec(sentences, size=20, window=5, min_count=5, workers=4)
>>> model[word0]
array([ 0.02596745, 0.01475067, -0.01839622, -0.01587902, 0.03079717,
0.00586761, 0.02367715, 0.00930568, -0.01521437, 0.02213679,
0.01043982, -0.00625582, 0.00173071, -0.00235749, 0.01309298,
0.00710233, -0.02270884, -0.01477827, 0.01166443, 0.00283862], dtype=float32)
>>> exit()
alvas@ubi:~$ python
Python 2.7.11 (default, Dec 15 2015, 16:46:19)
[GCC 4.8.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from nltk.corpus import brown
>>> from gensim.models import Word2Vec
>>> sentences = brown.sents()[:100]
>>> model = Word2Vec(sentences, size=10, window=5, min_count=5, workers=4)
>>> word0 = sentences[0][0]
>>> model[word0]
array([ 0.04985042, 0.02882229, -0.03625415, -0.03165979, 0.06049283,
0.01207791, 0.04722737, 0.01984878, -0.03026265, 0.04485954], dtype=float32)
>>> model = Word2Vec(sentences, size=20, window=5, min_count=5, workers=4)
>>> model[word0]
array([ 0.02596745, 0.01475067, -0.01839622, -0.01587902, 0.03079717,
0.00586761, 0.02367715, 0.00930568, -0.01521437, 0.02213679,
0.01043982, -0.00625582, 0.00173071, -0.00235749, 0.01309298,
0.00710233, -0.02270884, -0.01477827, 0.01166443, 0.00283862], dtype=float32)
Is it true then that the default random seed is fixed? If so, what is the default random seed number? Or is it because I'm testing on a small dataset?
If it's true that the random seed is fixed and different runs on the same data return the same vectors, a link to the canonical code or documentation would be much appreciated.
Per gensim's docs, for a fully deterministically-reproducible run, you must also limit the model to a single worker thread, to eliminate ordering jitter from OS thread scheduling.
A simple parameter edit to your code should do the trick.
model = Word2Vec(sentences, size=10, window=5, min_count=5, workers=1)
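The likely reason you already see identical vectors without touching the seed is that Word2Vec's seed parameter defaults to a fixed value (1), and gensim derives each word's initial vector deterministically from a hash of the word combined with that seed. A rough sketch of the idea (my simplification, not gensim's actual code; seeded_vector here is a hypothetical stand-in):

```python
import numpy as np

def seeded_vector(word, seed, size):
    # Derive a per-word RNG from a hash of the word plus the model seed,
    # then draw the initial vector from it (mimicking gensim's approach).
    rng = np.random.RandomState(hash(word + str(seed)) & 0xFFFFFFFF)
    return (rng.rand(size) - 0.5) / size

# Within one interpreter session, the same word and seed always
# reproduce the same initial vector, no matter how many models you build.
v1 = seeded_vector("The", 1, 10)
v2 = seeded_vector("The", 1, 10)
print(np.array_equal(v1, v2))  # True
```

Note that Python's built-in hash is itself randomized across interpreter runs on Python 3.3+, which is why the PYTHONHASHSEED caveat in the other answer also matters for cross-run reproducibility.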
Just a remark on the randomness.
If you are working with gensim's Word2Vec model on Python >= 3.3, keep in mind that hash randomisation is turned on by default. If you want consistency between two executions, make sure to set the PYTHONHASHSEED environment variable, e.g. when running your code like so:
PYTHONHASHSEED=123 python3 mycode.py
Next time you generate a model (using the same hash seed), it will be the same as the previously generated model, provided that the other randomness controls mentioned above are also in place: a fixed random state and a single worker.
See gensim's W2V source and Python docs for details.
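You can observe the hash randomisation at work without gensim at all: hash the same string in two fresh interpreter processes with PYTHONHASHSEED pinned, and they agree. A small sketch (the child processes just print hash('cat')):

```python
import os
import subprocess
import sys

def child_hash(seed_value):
    # Spawn a fresh interpreter with PYTHONHASHSEED pinned and
    # return hash('cat') as computed inside that process.
    env = dict(os.environ, PYTHONHASHSEED=seed_value)
    out = subprocess.run([sys.executable, "-c", "print(hash('cat'))"],
                         env=env, capture_output=True, text=True)
    return out.stdout.strip()

# With the seed pinned, every fresh interpreter computes the same hash.
print(child_hash("123") == child_hash("123"))  # True
```

Without PYTHONHASHSEED set, each fresh interpreter would generally produce a different value, which is exactly what perturbs gensim's word hashing between runs.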