I am getting this error when running the script on Ubuntu 16.04. Please bear with me, I am new to Python. I have checked the options already available on the internet, but I couldn't fix it.
RuntimeError: cuda runtime error (10) : invalid device ordinal at torch/csrc/cuda/Module.cpp:32
This is the file I am currently running:
from __future__ import print_function
from models import LipRead
import torch
import toml
from training import Trainer
from validation import Validator

print("Loading options...")
with open('options.toml', 'r') as optionsFile:
    options = toml.loads(optionsFile.read())

if(options["general"]["usecudnnbenchmark"] and options["general"]["usecudnn"]):
    print("Running cudnn benchmark...")
    torch.backends.cudnn.benchmark = True

#Create the model.
model = LipRead(options)

if(options["general"]["loadpretrainedmodel"]):
    model.load_state_dict(torch.load(options["general"]["pretrainedmodelpath"]))

#Move the model to the GPU.
if(options["general"]["usecudnn"]):
    model = model.cuda(options["general"]["gpuid"])

trainer = Trainer(options)
validator = Validator(options)

for epoch in range(options["training"]["startepoch"], options["training"]["epochs"]):
    if(options["training"]["train"]):
        trainer.epoch(model, epoch)
    if(options["validation"]["validate"]):
        validator.epoch(model)
And I suspect this config file has something to do with the error:
Title = "TOML Example"
[general]
usecudnn = true
usecudnnbenchmark = true
gpuid = 0
loadpretrainedmodel = true
pretrainedmodelpath = "trainedmodel.pt"
savemodel = true
modelsavepath = "savedmodel.pt"
[input]
batchsize = 18
numworkers = 18
shuffle = true
[model]
type = "LSTM"
inputdim = 256
hiddendim = 256
numclasses = 500
numlstms = 2
[training]
train = true
epochs = 15
startepoch = 10
statsfrequency = 1000
dataset = "/udisk/pszts-ssd/AV-ASR-data/BBC_Oxford/lipread_mp4"
learningrate = 0.003
momentum = 0.9
weightdecay = 0.0001
[validation]
validate = true
dataset = "/udisk/pszts-ssd/AV-ASR-data/BBC_Oxford/lipread_mp4"
saveaccuracy = true
accuracyfilelocation = "accuracy.txt"
I have finally narrowed the error down to the gpuid line.
Try running this:
import torch
print(torch.cuda.is_available())
If the output is False, PyTorch hasn't detected the GPU. I had the same issue, and reinstalling PyTorch worked for me. You might also want to look at this: https://github.com/pytorch/pytorch/issues/6098 .
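If the GPU is detected, the "invalid device ordinal" error usually means the gpuid in options.toml points at a GPU index that doesn't exist on the machine (for example gpuid = 1 on a single-GPU box). Here is a minimal sketch (not from the original script) of a guard you could put before the model.cuda(...) call, reusing the options dict from the question:

import torch

gpuid = options["general"]["gpuid"]
num_devices = torch.cuda.device_count()  # number of CUDA devices PyTorch can see

# Fall back to device 0 if the configured ordinal does not exist.
if gpuid >= num_devices:
    print("gpuid {} not available ({} device(s) found), using 0 instead"
          .format(gpuid, num_devices))
    gpuid = 0

model = model.cuda(gpuid)

If device_count() returns 0, no GPU is usable at all and the reinstall/driver check above is the thing to fix first.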
The pre-trained weights might be mapped to a different gpuid. If a model pre-trained on multiple CUDA devices is small enough, it might be possible to run it on a single GPU, assuming at least a batch of size 1 fits in the available GPU memory and RAM.
#WAS
model.load_state_dict(torch.load(final_model_file, map_location={'cuda:0':'cuda:1'}))
#IS
model.load_state_dict(torch.load(final_model_file, map_location={'cuda:0':'cuda:0'}))
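As a more general variant (not from the original answer), map_location also accepts a plain string or a callable, which is handy when you don't know in advance which device the checkpoint was saved from; final_model_file below stands for the same checkpoint path used above:

# Load everything onto the first GPU, regardless of where it was saved.
state = torch.load(final_model_file, map_location='cuda:0')

# Or load onto CPU first and move the model afterwards.
state = torch.load(final_model_file, map_location='cpu')

model.load_state_dict(state)

Loading to CPU first is the safest option when the saving and loading machines have different GPU setups.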