How can I create and fit a vocab.bpe file (for the OpenAI GPT and GPT-2 models) with my own corpus text?

This question is for those who are familiar with the OpenAI GPT or GPT-2 models, and in particular with the encoding step (Byte-Pair Encoding). This is my problem:

I would like to know how I could create my own vocab.bpe file.

I have a Spanish text corpus that I would like to use to fit my own BPE encoder. I have succeeded in creating the encoder.json with the python-bpe library, but I have no idea how to obtain the vocab.bpe file. I have reviewed the code in gpt-2/src/encoder.py but have not been able to find any hint. Any help or ideas?

Thank you so much in advance.

asked Apr 05 '19 by rafaelmg07

2 Answers

Check it out here; you can easily create the same vocab.bpe with the following command:

python learn_bpe.py -o ./vocab.bpe -i dataset.txt --symbols 50000
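
The command above looks like subword-nmt's learn_bpe.py script (an assumption on my part, since the link isn't shown here). If that's the case, you can also call it from Python. A minimal sketch, assuming the subword-nmt package is installed and dataset.txt is your corpus (both file names are placeholders):

from subword_nmt.learn_bpe import learn_bpe

# Learn 50,000 merge operations from the corpus and write them,
# one merge rule per line, to vocab.bpe.
with open("dataset.txt", encoding="utf-8") as infile, \
     open("vocab.bpe", "w", encoding="utf-8") as outfile:
    learn_bpe(infile, outfile, num_symbols=50000)

The output is a list of merge rules, which is the same kind of content GPT-2's vocab.bpe holds (one merge per line); keep in mind that GPT-2 also applies a byte-level pre-tokenization step, so a file learned this way is not guaranteed to be drop-in compatible with gpt-2/src/encoder.py.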
answered Oct 23 '22 by vpcom


I haven't worked with GPT-2, but bpemb is a very good place to start for subword embeddings. According to the README:

BPEmb is a collection of pre-trained subword embeddings in 275 languages, based on Byte-Pair Encoding (BPE) and trained on Wikipedia. Its intended use is as input for neural models in natural language processing.

I've used the pretrained embeddings for one of my projects along with sentencepiece, and it turned out to be very useful.
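
For illustration, a minimal sketch of loading the pre-trained Spanish subword model and using it for segmentation (assuming the bpemb package is installed; the example sentence is just a placeholder):

from bpemb import BPEmb

# Pre-trained Spanish BPE model: 50,000-subword vocabulary,
# 100-dimensional vectors; downloaded automatically on first use.
bpemb_es = BPEmb(lang="es", vs=50000, dim=100)

print(bpemb_es.encode("Quiero entrenar mi propio codificador BPE"))      # subword pieces
print(bpemb_es.encode_ids("Quiero entrenar mi propio codificador BPE"))  # subword ids

Note that this gives you BPEmb's own vocabulary and merges rather than a GPT-2-style vocab.bpe, but it is a quick way to get working Spanish subword segmentation and embeddings.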

answered Oct 23 '22 by scarecrow