I've tried several methods of loading the Google News word2vec vectors (https://code.google.com/archive/p/word2vec/):
en_nlp = spacy.load('en', vectors=False)
en_nlp.vocab.load_vectors_from_bin_loc('GoogleNews-vectors-negative300.bin')
The above gives:
MemoryError: Error assigning 18446744072820359357 bytes
I've also tried the .gz-packed vectors, and loading and saving them with gensim to a new format:
from gensim.models.word2vec import Word2Vec
model = Word2Vec.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
model.save_word2vec_format('googlenews2.txt')
This file then contains the words and their word vectors on each line. I tried to load them with:
en_nlp.vocab.load_vectors('googlenews2.txt')
but it returns "0".
What is the correct way to do this?
Update:
I can load a file I created myself into spaCy. I use a test.txt file with "string 0.0 0.0 ...." on each line, then compress it with bzip2 to test.txt.bz2. A minimal sketch of generating such a file (hypothetical words and dummy values):
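with open('test.txt', 'w') as f:
    for word in ('apple', 'banana', 'cherry'):  # hypothetical vocabulary
        f.write(word + ' ' + ' '.join(['0.0'] * 300) + '\n')  # dummy 300-dimensional vector (assumed dimensionality)

Then I create a spaCy-compatible binary file: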
spacy.vocab.write_binary_vectors('test.txt.bz2', 'test.bin')
I can then load it into spaCy:
nlp.vocab.load_vectors_from_bin_loc('test.bin')
This works! However, when I do the same process for googlenews2.txt, I get the following error:
lib/python3.6/site-packages/spacy/cfile.pyx in spacy.cfile.CFile.read_into (spacy/cfile.cpp:1279)()
OSError:
The word2vec model's accuracy can be improved by using different parameters for training, different corpus sizes, or a different model architecture. The Google News model includes word vectors for a vocabulary of 3 million words and phrases.
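As a quick sanity check, that vocabulary and its dimensionality can be inspected after loading the file in gensim (a sketch using the gensim 3.x attribute names; gensim 4.x exposes key_to_index instead of vocab):

from gensim.models import KeyedVectors
model = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
print(len(model.vocab), model.vector_size)  # 3000000 300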
To load the vectors in spaCy 1.x, first load the Google News vectors into gensim and convert them to a new format (each line of the .txt contains a single vector: string, vec):
from gensim.models import KeyedVectors
model = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
model.save_word2vec_format('googlenews.txt')
Remove the first line of the .txt (the header with the vocabulary size and vector dimensionality):
tail -n +2 googlenews.txt > googlenews.new && mv -f googlenews.new googlenews.txt
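If tail is unavailable, the same step can be done in Python, streaming the file since it is several gigabytes (a sketch):

import os

with open('googlenews.txt') as src, open('googlenews.new', 'w') as dst:
    next(src)  # skip the header line (vocabulary size and dimensionality)
    for line in src:
        dst.write(line)
os.replace('googlenews.new', 'googlenews.txt')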
Compress the txt as .bz2:
bzip2 googlenews.txt
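Equivalently in Python, if the bzip2 command is not available (a sketch using only the standard library):

import bz2
import shutil

with open('googlenews.txt', 'rb') as src, bz2.open('googlenews.txt.bz2', 'wb') as dst:
    shutil.copyfileobj(src, dst)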
Create a spaCy-compatible binary file:
spacy.vocab.write_binary_vectors('googlenews.txt.bz2','googlenews.bin')
Move googlenews.bin to /lib/python/site-packages/spacy/data/en_google-1.0.0/vocab/googlenews.bin in your Python environment.
Then load the word vectors:
import spacy
nlp = spacy.load('en', vectors='en_google')
or load them later:
nlp.vocab.load_vectors_from_bin_loc('googlenews.bin')
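To verify the vectors are active, a quick similarity check (a sketch; with the Google News vectors, dog and cat should score noticeably higher than dog and banana):

doc = nlp('dog cat banana')
print(doc[0].similarity(doc[1]))  # dog vs. cat
print(doc[0].similarity(doc[2]))  # dog vs. banana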
I know this question has already been answered, but I am going to offer a simpler solution, which loads the Google News vectors into a blank spaCy nlp object.
import gensim
import spacy

# Path to google news vectors
google_news_path = "path\to\google\news\\GoogleNews-vectors-negative300.bin.gz"

# Load google news vecs in gensim
model = gensim.models.KeyedVectors.load_word2vec_format(google_news_path, binary=True)

# Init blank english spacy nlp object
nlp = spacy.blank('en')

# Loop through range of all indexes, get words associated with each index.
# The words in the keys list will correspond to the order of the google embed matrix.
keys = []
for idx in range(3000000):
    keys.append(model.index2word[idx])

# Set the vectors for our nlp object to the google news vectors
nlp.vocab.vectors = spacy.vocab.Vectors(data=model.syn0, keys=keys)

>>> nlp.vocab.vectors.shape
(3000000, 300)
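Once the vectors are attached, lexeme lookups resolve to the Google News embeddings. A quick check (a sketch, assuming 'dog' is in the vocabulary):

dog = nlp.vocab['dog']
print(dog.has_vector)    # True
print(dog.vector.shape)  # (300,)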