For the xx_ent_wiki_sm model in spaCy 2.0, the documentation mentions the "WikiNER" dataset, which points to the article 'Learning multilingual named entity recognition from Wikipedia'.
Is there any resource for downloading that dataset to retrain the model? Or a script for processing a Wikipedia dump?
The data server from Joel's (and my) former research group seems to be offline: http://downloads.schwa.org/wikiner
I found a mirror of the wp3 files, which are the ones I'm using in spaCy, here: https://github.com/dice-group/FOX/tree/master/input/Wikiner
To retrain the spaCy model, you'll need to create a train/dev split (I'll get mine online for direct comparison, but for now just take a random cut, as sketched below) and name the files with the .iob extension.
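Here is a minimal sketch of such a random cut. It assumes the wp3 format of one pipe-delimited sentence per line; the file names and the 90/10 ratio are illustrative choices, not part of the original instructions.

import random

# Reproducible random train/dev split of a WikiNER wp3 file.
# Assumes one sentence per line; blank lines are skipped.
random.seed(0)

with open("aij-wikiner-en-wp3", encoding="utf8") as f:
    sentences = [line for line in f if line.strip()]

random.shuffle(sentences)
cut = int(len(sentences) * 0.9)  # 90% train, 10% dev

with open("train.iob", "w", encoding="utf8") as f:
    f.writelines(sentences[:cut])
with open("dev.iob", "w", encoding="utf8") as f:
    f.writelines(sentences[cut:])

Then convert each split file with: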
spacy convert -n 10 /path/to/file.iob /output/directory
The -n 10 argument is important for use in spaCy: it concatenates sentences into 'pseudo-paragraphs' of 10 sentences each. This lets the model learn that documents can contain multiple sentences.
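If you want to verify the grouping, a short check like this should work. It assumes spaCy v2's JSON training format (a list of documents, each with "paragraphs" containing "sentences") and that convert wrote train.json into the output directory, so the path is an assumption.

import json

# Count sentences per pseudo-paragraph in the converted training data.
with open("/output/directory/train.json", encoding="utf8") as f:
    docs = json.load(f)

for doc in docs[:3]:  # peek at the first few documents
    for paragraph in doc["paragraphs"]:
        print(len(paragraph["sentences"]), "sentences in this pseudo-paragraph")

Each pseudo-paragraph should contain up to 10 sentences (the last group in a file may be shorter).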