In PyTorch, if I'm not writing anything about using CPU/GPU, and my machine supports CUDA (torch.cuda.is_available() == True): what is my script using, CPU or GPU? If CPU, what should I do to make it run on GPU? And if GPU, will the script crash when torch.cuda.is_available() == False?
My way is like this (below pytorch 0.4):
dtype = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor
x = torch.zeros(2, 2).type(dtype)
UPDATE pytorch 0.4:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = MyRNN().to(device)
from PyTorch 0.4.0 Migration Guide.
1. What is my script using, CPU or GPU?
The "script" does not have any device allegiance. Where computations are done (CPU or GPU) depends on the specific tensor being operated on, which in turn depends on how that tensor was created.
However, tensor-creating functions such as torch.tensor default to the 'cpu' device:
torch.FloatTensor() # CPU tensor
torch.cuda.FloatTensor() # GPU tensor
torch.tensor([1, 2], device='cpu') # CPU tensor
torch.tensor([1, 2], device='cuda') # GPU tensor
torch.tensor([1,2]) # CPU tensor <--
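Since placement is a property of each tensor, you can always check where a tensor lives via its .device attribute. A minimal sketch (the CUDA branch only runs when a GPU is actually present):

```python
import torch

x = torch.tensor([1, 2])   # created on the CPU by default
print(x.device)            # cpu

if torch.cuda.is_available():
    y = torch.tensor([1, 2], device='cuda')
    print(y.device)        # cuda:0
```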
2. If CPU, what should I do to make it run on GPU?
You can change the default type for every newly created tensor with:
# Approach 1
torch.set_default_tensor_type('torch.cuda.FloatTensor')
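To sketch what changing the default type does, here is the same call with the CPU-side 'torch.DoubleTensor' variant, so it runs even without a GPU; substituting 'torch.cuda.FloatTensor' has the analogous effect on a CUDA machine:

```python
import torch

torch.set_default_tensor_type('torch.DoubleTensor')
x = torch.zeros(2, 2)
print(x.dtype)  # torch.float64: new tensors pick up the default type

torch.set_default_tensor_type('torch.FloatTensor')  # restore the default
```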
Or you can manually copy each tensor to the GPU:
# Approach 2
device = "cuda" if torch.cuda.is_available() else "cpu"
my_tensor = my_tensor.to(device)
my_model.to(device) # Operates in place for model parameters
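Note the asymmetry: for tensors, .to(device) returns a new tensor, while for modules it moves the parameters in place. A minimal sketch using a toy nn.Linear model (the model and sizes are purely illustrative, not from the original post):

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Linear(4, 2)   # toy model for illustration
model.to(device)          # modules: parameters moved in place

x = torch.randn(3, 4)
x = x.to(device)          # tensors: .to() returns a NEW tensor
out = model(x)            # both operands now share a device
print(out.device)
```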
3. If GPU, will this script crash if
torch.cuda.is_available() == False
?
Yes, in Approach 1 the script would crash with the following error:
RuntimeError: No CUDA GPUs are available
In Approach 2 it will simply fall back to the CPU.
4. Does this do anything about making the training faster?
That depends. For most common PyTorch neural-net training scenarios, yes: moving to the GPU improves speed.
5. I'm aware of Porting PyTorch code from CPU to GPU but this is old. Does this situation change in v0.4 or the upcoming v1.0?
There are a number of ways to port code from CPU to GPU:
# Syntax 1
my_tensor = my_tensor.cuda()
# Syntax 2
device = 'cuda' if torch.cuda.is_available() else 'cpu'
my_tensor = my_tensor.to(device)
Syntax 2 is often preferred because it lets you switch between CPU and GPU by changing a single variable.
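Putting Syntax 2 together into a complete, hypothetical training step: the model, optimizer, loss, and data below are all illustrative stand-ins, not code from the original post.

```python
import torch
import torch.nn as nn

device = 'cuda' if torch.cuda.is_available() else 'cpu'

model = nn.Linear(10, 1).to(device)         # toy model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

inputs = torch.randn(8, 10, device=device)  # fake batch
targets = torch.randn(8, 1, device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)      # everything lives on one device
loss.backward()
optimizer.step()
print(loss.item())
```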
PyTorch defaults to the CPU, unless you use the .cuda() methods on your models and the torch.cuda.XTensor variants of PyTorch's tensors. You should write your code so that it uses GPU processing when torch.cuda.is_available() returns True:
if torch.cuda.is_available():
    model.cuda()
else:
    pass  # do nothing; run on the CPU