 

How to predownload a transformers model

I want to perform a text generation task in a Flask app and host it on a web server. However, when downloading the GPT models, the Elastic Beanstalk managed EC2 instance crashes because the download takes too much time and memory.

from transformers import OpenAIGPTTokenizer, TFOpenAIGPTLMHeadModel

# These from_pretrained calls download the model weights on first use.
model = TFOpenAIGPTLMHeadModel.from_pretrained("openai-gpt")
tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")

These are the lines causing the issue; the openai-gpt model is approximately 445 MB. I am using the transformers library. Instead of downloading the model at these lines, I was wondering if I could pickle the model and bundle it as part of the repository. Is that possible with this library? Otherwise, how can I preload the model to avoid the issues I am having?

asked Dec 10 '22 by Josh Zwiebel

2 Answers

Approach 1:

Search for the model here: https://huggingface.co/models

Download the model files from these links:

PyTorch model: https://s3.amazonaws.com/models.huggingface.co/bert/openai-gpt-pytorch_model.bin

TensorFlow model: https://s3.amazonaws.com/models.huggingface.co/bert/openai-gpt-tf_model.h5

Config file: https://s3.amazonaws.com/models.huggingface.co/bert/openai-gpt-config.json

Source: https://huggingface.co/transformers/_modules/transformers/configuration_openai.html#OpenAIGPTConfig

You can manually download the model files (in your case the TensorFlow weights .h5 file and the config.json file) and put them in a folder, say model, inside the repository. (You can also compress the folder and decompress it on the EC2 instance if needed.)
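For example, a minimal download sketch (assuming the links above are still live; the downloaded files are renamed to tf_model.h5 and config.json, the names from_pretrained looks for inside a local folder):

import os
import urllib.request

# Fetch the TensorFlow weights and config into a local "model" folder.
# File names matter: from_pretrained expects tf_model.h5 and config.json.
files = {
    "tf_model.h5": "https://s3.amazonaws.com/models.huggingface.co/bert/openai-gpt-tf_model.h5",
    "config.json": "https://s3.amazonaws.com/models.huggingface.co/bert/openai-gpt-config.json",
}

os.makedirs("model", exist_ok=True)
for name, url in files.items():
    urllib.request.urlretrieve(url, os.path.join("model", name))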

Then you can load the model in your web server directly from that path instead of downloading it (the model folder contains the .h5 and config.json):

model = TFOpenAIGPTLMHeadModel.from_pretrained("model") 
# model folder contains .h5 and config.json
tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt") 
# this is a light download
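For completeness, here is a rough sketch of how the locally loaded model could be wired into the Flask app from the question. The /generate route, request shape, and generation parameters are illustrative assumptions, not part of the original setup:

from flask import Flask, jsonify, request
from transformers import OpenAIGPTTokenizer, TFOpenAIGPTLMHeadModel

app = Flask(__name__)

# Load once at startup, not per request; "model" is the bundled folder.
model = TFOpenAIGPTLMHeadModel.from_pretrained("model")
tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")

@app.route("/generate", methods=["POST"])  # illustrative endpoint
def generate():
    prompt = request.get_json()["prompt"]
    input_ids = tokenizer.encode(prompt, return_tensors="tf")
    output_ids = model.generate(input_ids, max_length=50)
    text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    return jsonify({"text": text})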

Approach 2:

Instead of downloading via the links, you can download the model on your local machine using the conventional method.

from transformers import OpenAIGPTTokenizer, TFOpenAIGPTLMHeadModel

model = TFOpenAIGPTLMHeadModel.from_pretrained("openai-gpt")
tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")

This downloads the model. Now you can save the weights to a folder using the save_pretrained function.

model.save_pretrained('/content/') # saving inside content folder

Now, the content folder should contain a .h5 file and a config.json.

Just upload that folder to the repository and load the model from it, as in Approach 1.
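If you want the tokenizer bundled offline as well (an optional extension, not strictly required since the tokenizer download is small), you can save both into the same folder and load them back from it:

# Save the weights + config and the tokenizer files into one folder...
model.save_pretrained("model")
tokenizer.save_pretrained("model")

# ...then load both from the local path, with no network access needed.
model = TFOpenAIGPTLMHeadModel.from_pretrained("model")
tokenizer = OpenAIGPTTokenizer.from_pretrained("model")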

answered Jan 22 '23 by Zabir Al Nazi

Open https://huggingface.co/models and search for the model you want. Click on the model name and finally click on "List all files in model". You will get a list of the files you can download.
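If you prefer to script this instead of using the web UI, the huggingface_hub package (a separate install, and my own suggestion rather than part of this answer) can list a model's files:

from huggingface_hub import list_repo_files

# Prints every downloadable file in the openai-gpt repository.
print(list_repo_files("openai-gpt"))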

answered Jan 22 '23 by Manuel Alves