When I want to put the model on the GPU, I get the following error:
"RuntimeError: Input and hidden tensors are not at the same device, found input tensor at cuda:0 and hidden tensor at cpu"
However, everything has already been moved to the GPU:

for m in model.parameters():
    print(m.device)  # prints cuda:0

if torch.cuda.is_available():
    model = model.cuda()
    test = test.cuda()  # `test` is the input
Windows 10 Server
PyTorch 1.2.0, built for CUDA 9.2
CUDA 9.2
cuDNN 7.6.3 for CUDA 9.2
In PyTorch, the torch.cuda package adds support for CUDA tensor types, which implement the same functions as CPU tensors but use the GPU for computation. If you want a tensor to live on the GPU, you can call .cuda() on it.
A torch.Tensor is a multi-dimensional matrix containing elements of a single data type.
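As a quick sanity check, the device of a tensor can be inspected before and after moving it; this sketch is device-agnostic, falling back to the CPU when CUDA is unavailable:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

t = torch.ones(3)
print(t.device)   # a fresh tensor lives on the CPU by default

t = t.to(device)  # equivalent to t.cuda() when CUDA is available
print(t.device)   # now matches `device`
```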
You need to move the model, the inputs, and the targets to CUDA:

if torch.cuda.is_available():
    model.cuda()
    inputs = inputs.cuda()
    target = target.cuda()
This error occurs when PyTorch tries to compute an operation between a tensor stored on the CPU and one stored on the GPU. At a high level there are two kinds of tensors involved: those holding your data and those holding the model's parameters. Both can be copied to the same device like so:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
data = data.to(device)
model = model.to(device)
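Note that in this particular error the offending tensor is a *hidden* tensor, which with recurrent layers (RNN/LSTM/GRU) usually means a manually created initial hidden state was left on the CPU even though the model and input were moved. A minimal device-agnostic sketch (the GRU sizes here are made up for illustration):

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical layer sizes, just for demonstration.
rnn = nn.GRU(input_size=8, hidden_size=16, batch_first=True).to(device)

x = torch.randn(4, 10, 8).to(device)       # input batch: (batch, seq, features)
h0 = torch.zeros(1, 4, 16, device=device)  # create the hidden state on the SAME device

out, hn = rnn(x, h0)  # no cross-device error, since x, h0, and rnn all match
print(out.device)
```

Creating h0 with device=device (or calling h0 = h0.to(device)) is what resolves the "hidden tensor at cpu" part of the message.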