
How to run PyTorch on GPU by default?

Tags:

python

pytorch

I want to run PyTorch using cuda. I set model.cuda() and torch.cuda.LongTensor() for all tensors.

Do I have to create tensors using .cuda explicitly if I have used model.cuda()?

Is there a way to make all computations run on GPU by default?

asked May 05 '17 by tian tong


People also ask

How do I know which GPU PyTorch is using?

PyTorch's CUDA library lets you keep track of which GPU you are using; any tensor you create is assigned to the current device, and operations on that tensor produce results on the same device. By default, PyTorch does not allow cross-GPU operations: operands must live on the same device.
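A quick way to check which device PyTorch is using, as described above (the CUDA calls are guarded so this also runs on a CPU-only machine):

```python
import torch

if torch.cuda.is_available():
    print(torch.cuda.current_device())    # index of the active GPU
    print(torch.cuda.get_device_name(0))  # human-readable name of GPU 0

t = torch.zeros(3)
print(t.device)  # every tensor reports the device it lives on
```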

How to transfer tensors from CPU to GPU in PyTorch?

PyTorch provides an API to transfer tensors from the CPU to the GPU, and the same calls work on models. A `device` object holds the device on which training happens, whether CPU or GPU.

What is the device in PyTorch?

A `device` object holds the device on which training happens, whether CPU or GPU:

device = torch.device("cuda:4" if torch.cuda.is_available() else "cpu")
print(device)

The torch.cuda package supports CUDA tensor types and implements GPU computations for them.
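The device-agnostic pattern above extends naturally to tensor creation; a minimal sketch (using GPU 0 rather than GPU 4, and falling back to CPU when no GPU is present):

```python
import torch

# Pick GPU 0 when available, otherwise fall back to the CPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

x = torch.randn(4, 4, device=device)  # created directly on `device`
y = (x @ x).cpu()                     # results land on the same device; .cpu() copies back
```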

Does PyTorch work with CUDA?

PyTorch is an open-source machine learning framework for scientific and tensor computation, and it works with CUDA: you can use PyTorch with GPUs to speed up deep learning.


2 Answers

I do not think you can specify that you want to use CUDA tensors by default. However, you should have a look at the official PyTorch examples.

In the imagenet training/testing script, they use a wrapper over the model called DataParallel. This wrapper has two advantages:

  • it handles the data parallelism over multiple GPUs
  • it handles the casting of cpu tensors to cuda tensors

As you can see at L164 of that script, you don't have to manually cast your inputs/targets to cuda.

Note that if you have multiple GPUs and want to use a single one, launch your Python/PyTorch script with the CUDA_VISIBLE_DEVICES prefix, for instance CUDA_VISIBLE_DEVICES=0 python main.py.
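The wrapper pattern described above can be sketched as follows. The `nn.Linear` here is a hypothetical stand-in model for illustration; the CUDA path is guarded so the snippet also runs on a CPU-only machine:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in model for illustration

if torch.cuda.is_available():
    # DataParallel splits each batch across the visible GPUs and
    # scatters CPU inputs to them, so no manual .cuda() on the data
    model = nn.DataParallel(model).cuda()

inputs = torch.randn(8, 10)   # stays on the CPU
outputs = model(inputs)       # the wrapper handles the device transfer
```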

answered Sep 24 '22 by Remi


Yes. You can set the default tensor type to cuda with:

torch.set_default_tensor_type('torch.cuda.FloatTensor')

Do I have to create tensors using .cuda explicitly if I have used model.cuda()?

Yes, you need to move not only your model's [parameter] tensors to cuda, but also the tensors for the data features and targets (and any other tensors the model uses).
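A minimal sketch of moving both the model and the data, using a hypothetical `nn.Linear` model and random data for illustration (the device falls back to CPU when no GPU is present):

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(4, 1).to(device)         # moves the model's parameter tensors
features = torch.randn(16, 4).to(device)   # data must be moved separately
targets = torch.randn(16, 1).to(device)    # ...and so must the targets

loss = nn.functional.mse_loss(model(features), targets)
```

Forgetting any one of these `.to(device)` calls raises a device-mismatch error, which is why `model.cuda()` alone is not enough.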

answered Sep 22 '22 by iacob