
If I'm not specifying to use CPU/GPU, which one is my script using?

Tags: python, pytorch

In PyTorch, if I don't write anything about using the CPU or GPU, and my machine supports CUDA (torch.cuda.is_available() == True):

  1. What is my script using, CPU or GPU?
  2. If CPU, what should I do to make it run on GPU? Do I need to rewrite everything?
  3. If GPU, will this script crash if torch.cuda.is_available() == False?
  4. Does any of this make training faster?
  5. I'm aware of Porting PyTorch code from CPU to GPU, but that is old. Does the situation change in v0.4 or the upcoming v1.0?
asked May 23 '18 by xxbidiao

4 Answers

My way is like this (pre-0.4 PyTorch):

dtype = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor
x = torch.zeros(2, 2).type(dtype)

UPDATE for PyTorch 0.4:

use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
model = MyRNN().to(device)

from the PyTorch 0.4.0 Migration Guide.
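For completeness, here is a minimal, self-contained sketch of that 0.4-style pattern; the linear model and random input are placeholders standing in for MyRNN and real data:

import torch

use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")

model = torch.nn.Linear(10, 2).to(device)   # stand-in for MyRNN().to(device)
inputs = torch.randn(8, 10).to(device)      # input data must be moved as well

outputs = model(inputs)
print(outputs.device)                       # cuda:0 if CUDA is available, otherwise cpu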

answered Nov 19 '22 by Ria


1. What is my script using, CPU or GPU?

The "script" does not have any device alegiance. Where computations are done (CPU or GPU) depends on the specific tensor being operated on. Hence it depends on how the tensor was created.

However, tensor-creation functions default to the CPU unless you explicitly request CUDA:

torch.FloatTensor()                   # CPU tensor
torch.cuda.FloatTensor()              # GPU tensor

torch.tensor([1, 2], device='cpu')    # CPU tensor
torch.tensor([1, 2], device='cuda')   # GPU tensor

torch.tensor([1, 2])                  # CPU tensor  <--
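If you are unsure where a given tensor lives, you can check its .device attribute (a quick sanity check, not part of the original answer):

import torch

x = torch.tensor([1, 2])
print(x.device)                  # cpu

if torch.cuda.is_available():
    y = torch.tensor([1, 2], device='cuda')
    print(y.device)              # cuda:0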

2. If CPU, what should I do to make it run on GPU?

You can change the default tensor type so that newly created tensors are allocated on the GPU:

# Approach 1
torch.set_default_tensor_type('torch.cuda.FloatTensor')

Or you can manually copy each tensor to the GPU:

# Approach 2
device = "cuda" if torch.cuda.is_availble() else "cpu"

my_tensor = my_tensor.to(device)
my_model.to(device) # Operates in place for model parameters
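Note the asymmetry flagged in the comment above: for plain tensors, .to() returns a new tensor and leaves the original untouched, while for nn.Module objects it moves the parameters in place. A small illustrative sketch (the names are placeholders):

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

t = torch.zeros(3)
t.to(device)        # returns a NEW tensor; t itself is unchanged
t = t.to(device)    # reassign to actually use the moved tensor

model = torch.nn.Linear(3, 1)
model.to(device)    # module parameters are moved in place; no reassignment needed
print(next(model.parameters()).device)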

3. If GPU, will this script crash if torch.cuda.is_available() == False?

Yes, in Approach 1 the script would crash with the following error:

RuntimeError: No CUDA GPUs are available

In Approach 2 it will simply fall back to the CPU.


4. Does any of this make training faster?

That depends. For most common PyTorch neural-network training workloads, yes: moving computation to the GPU will speed up training.
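As a rough illustration (a hypothetical micro-benchmark, not from the original answer; the numbers depend entirely on your hardware), you can time a large matrix multiplication on both devices. CUDA kernels launch asynchronously, so torch.cuda.synchronize() is needed for a fair comparison:

import time
import torch

x = torch.randn(4096, 4096)

start = time.time()
_ = x @ x                        # CPU matmul
cpu_seconds = time.time() - start

if torch.cuda.is_available():
    x_gpu = x.to("cuda")
    torch.cuda.synchronize()     # wait for the host-to-device copy
    start = time.time()
    _ = x_gpu @ x_gpu            # GPU matmul
    torch.cuda.synchronize()     # wait for the kernel to finish
    gpu_seconds = time.time() - start
    print(f"CPU: {cpu_seconds:.3f}s  GPU: {gpu_seconds:.3f}s")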


5. I'm aware of Porting PyTorch code from CPU to GPU, but that is old. Does the situation change in v0.4 or the upcoming v1.0?

There are a number of ways to port code from CPU to GPU:

# Syntax 1
my_tensor = my_tensor.cuda()

# Syntax 2
device = 'cuda' if torch.cuda.is_available() else 'cpu'
my_tensor = my_tensor.to(device)

Syntax 2 is often preferred because it lets you switch between CPU and GPU by changing a single variable.
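For example, the whole script can be driven off that one device variable, so the same code runs unchanged on CPU-only and CUDA machines (a minimal self-contained sketch with a placeholder model):

import torch

# Change (or override) this single line to switch devices:
device = 'cuda' if torch.cuda.is_available() else 'cpu'

model = torch.nn.Linear(10, 2).to(device)
batch = torch.randn(8, 10, device=device)
print(model(batch).shape)        # torch.Size([8, 2]) on either device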

answered Nov 19 '22 by iacob


PyTorch defaults to the CPU, unless you call .cuda() on your models and tensors or use the torch.cuda.XTensor variants of PyTorch's tensor types.
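In code, that older style looks roughly like this (a sketch of the pre-0.4 idiom the answer refers to; the model is a placeholder):

import torch

model = torch.nn.Linear(4, 1)          # parameters live on the CPU by default
x = torch.FloatTensor(2, 4).zero_()    # CPU tensor

if torch.cuda.is_available():
    model.cuda()                       # move parameters to the GPU
    x = x.cuda()                       # now a torch.cuda.FloatTensor

print(next(model.parameters()).device, x.device)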

answered Nov 19 '22 by Omegastick


You should write your code so that it uses the GPU if torch.cuda.is_available() returns True:

if torch.cuda.is_available():
    model.cuda()   # move model parameters to the GPU
else:
    pass           # do nothing; run on the CPU
answered Nov 19 '22 by takethelongsh0t