I'm training a text classification (NLP) model with fastai on Google Colab (GPU). I can load the trained model with load_learner without any error, but when I switch to the CPU, I get the error "RuntimeError: _th_index_select not supported on CPUType for Half". Is there any way for me to run predictions on the CPU?
from fastai import *
from fastai.text import *
from sklearn.metrics import f1_score

# force fastai to run on the CPU
defaults.device = torch.device('cpu')

@np_func
def f1(inp, targ): return f1_score(targ, np.argmax(inp, axis=-1))

path = Path('/content/drive/My Drive/Test_fast_ai')
learn = load_learner(path)  # loads export.pkl from `path`
learn.predict("so sad")
RuntimeError Traceback (most recent call last)
<ipython-input-13-3775eb2bfe91> in <module>()
----> 1 learn.predict("so sad")
11 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1504 # remove once script supports set_grad_enabled
1505 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1506 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1507
1508
RuntimeError: _th_index_select not supported on CPUType for Half
I was having the same issue. Are you training your model with to_fp16()? I fixed the issue by removing that call from the learner. For instance, when I trained with the following line, I got the same RuntimeError when using the model to predict in a CPU environment:
learn_c = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5, metrics=[accuracy]).to_fp16()
To fix it, I just removed the .to_fp16() suffix and everything went smoothly.
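If you don't want to retrain from scratch, fastai v1 also lets you cast an already-trained half-precision learner back to full precision before exporting. A minimal sketch, assuming fastai v1's Learner.to_fp32() and that the fitted learn_c from above is still in scope:

# cast the half-precision weights back to float32, then export;
# the resulting export.pkl can then be loaded and run on the CPU
learn_c.to_fp32()
learn_c.export()  # writes export.pkl into the learner's data path

After that, load_learner(path) on a CPU machine should predict without the Half-precision error.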