This question is for those who are familiar with OpenAI's GPT or GPT-2 models, in particular with the encoding step (Byte-Pair Encoding). This is my problem:
I would like to know how I could create my own vocab.bpe file.
I have a Spanish text corpus that I would like to use to fit my own BPE encoder. I have succeeded in creating the encoder.json with the python-bpe library, but I have no idea how to obtain the vocab.bpe file. I have reviewed the code in gpt-2/src/encoder.py, but I have not been able to find any hint. Any help or ideas?
Thank you so much in advance.
Check it out here; you can easily create the same vocab.bpe file using the following command:
python learn_bpe.py -o ./vocab.bpe -i dataset.txt --symbols 50000
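If learn_bpe here refers to the subword-nmt toolkit (an assumption on my part; the flags match its learn_bpe.py script), the same step can also be run from Python. A minimal sketch, with dataset.txt standing in for your Spanish corpus:

from subword_nmt.learn_bpe import learn_bpe

# Learn 50,000 BPE merge operations from the corpus and write them to vocab.bpe
with open("dataset.txt", encoding="utf-8") as infile, \
        open("vocab.bpe", "w", encoding="utf-8") as outfile:
    learn_bpe(infile, outfile, num_symbols=50000)

One caveat: GPT-2's tokenizer is byte-level BPE, while subword-nmt learns merges over characters, so a file produced this way follows the same merge-list format but is not a drop-in byte-level equivalent of OpenAI's vocab.bpe.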
I haven't worked with GPT-2, but bpemb is a very good place to start for subword embeddings. According to the README:
BPEmb is a collection of pre-trained subword embeddings in 275 languages, based on Byte-Pair Encoding (BPE) and trained on Wikipedia. Its intended use is as input for neural models in natural language processing.
I've used the pretrained embeddings for one of my projects along with sentencepiece and it turned out to be very useful.
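Since the original question is about a Spanish corpus, here is a minimal sketch of how bpemb is used for Spanish (the vs and dim values below are just example choices; the library downloads the matching pretrained SentencePiece model and embeddings on first use):

from bpemb import BPEmb

# Pretrained Spanish subword model: 50k BPE symbols, 100-dimensional embeddings
bpemb_es = BPEmb(lang="es", vs=50000, dim=100)

subwords = bpemb_es.encode("¿Cómo puedo crear mi propio vocab.bpe?")  # list of subword strings
vectors = bpemb_es.embed("¿Cómo puedo crear mi propio vocab.bpe?")    # array of shape (n_subwords, 100)
print(subwords)
print(vectors.shape)

Note that this gives you BPEmb's pretrained, Wikipedia-based vocabulary and embeddings rather than a vocabulary fitted on your own corpus.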