
pytorch - use device inside 'with statement'

Is there a way of running PyTorch inside the context of a specific (GPU) device, without having to specify the device for each new tensor (e.g. via the .to option)?

Something like the TensorFlow equivalent of with tf.device('/device:GPU:0'): ...

It seems that the default device is the cpu (unless I'm doing it wrong):

with torch.cuda.device(0):
    a = torch.zeros(1)
    print(a.device)

>>> cpu
asked Sep 15 '25 by nivniv

1 Answer

Unfortunately, in the current implementation the with-device statement doesn't work this way; it can only be used to switch between CUDA devices.
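To make that concrete: the context manager only changes which GPU counts as the "current" device, it does not redirect tensor creation by itself. A minimal sketch, assuming a machine with at least two GPUs:

import torch

print(torch.cuda.current_device())        # 0 (the default device)
with torch.cuda.device(1):
    print(torch.cuda.current_device())    # 1 inside the context
print(torch.cuda.current_device())        # back to 0 after the context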


You will still have to use the device parameter to specify which device should be used (or .cuda() to move the tensor to a specific GPU), with a call like this:

cuda = torch.device('cuda')

# allocates a tensor on the current CUDA device
a = torch.tensor([1., 2.], device=cuda)
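The .cuda() alternative mentioned above works the same way; a minimal sketch, assuming a second GPU (index 1) is present:

# creates the tensor on the CPU, then moves it to GPU 1
b = torch.tensor([1., 2.]).cuda(1)
print(b.device)    # device(type='cuda', index=1)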

So to access cuda:1:

cuda = torch.device('cuda')

with torch.cuda.device(1):
    # allocates a tensor on GPU 1
    a = torch.tensor([1., 2.], device=cuda)

And to access cuda:2:

cuda = torch.device('cuda')

with torch.cuda.device(2):
    # allocates a tensor on GPU 2
    a = torch.tensor([1., 2.], device=cuda)
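In both snippets the index comes from the surrounding context manager: torch.device('cuda') without an explicit index resolves to whatever the current CUDA device is, which you can verify directly:

print(a.device)    # device(type='cuda', index=2) inside the cuda:2 block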

However, tensors created without the device parameter will still be CPU tensors:

cuda = torch.device('cuda')

with torch.cuda.device(1):
    # allocates a tensor on CPU
    a = torch.tensor([1., 2.])
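If repeating the device argument on every call becomes tedious, one workaround is to bind the device once, for example with functools.partial. This is a sketch of my own, not a PyTorch feature, and the helper names are made up:

import functools
import torch

cuda1 = torch.device('cuda:1')

# hypothetical helpers that always allocate on cuda:1
zeros_on_gpu1 = functools.partial(torch.zeros, device=cuda1)
randn_on_gpu1 = functools.partial(torch.randn, device=cuda1)

a = zeros_on_gpu1(3)      # tensor([0., 0., 0.], device='cuda:1')
b = randn_on_gpu1(2, 2)   # 2x2 random tensor on cuda:1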

To sum it up:

No - unfortunately the with-device statement, in its current implementation, cannot be used in the way you described in your question.


Here are some more examples from the documentation:

cuda = torch.device('cuda')     # Default CUDA device
cuda0 = torch.device('cuda:0')
cuda2 = torch.device('cuda:2')  # GPU 2 (these are 0-indexed)

x = torch.tensor([1., 2.], device=cuda0)
# x.device is device(type='cuda', index=0)
y = torch.tensor([1., 2.]).cuda()
# y.device is device(type='cuda', index=0)

with torch.cuda.device(1):
    # allocates a tensor on GPU 1
    a = torch.tensor([1., 2.], device=cuda)

    # transfers a tensor from CPU to GPU 1
    b = torch.tensor([1., 2.]).cuda()
    # a.device and b.device are device(type='cuda', index=1)

    # You can also use ``Tensor.to`` to transfer a tensor:
    b2 = torch.tensor([1., 2.]).to(device=cuda)
    # b.device and b2.device are device(type='cuda', index=1)

    c = a + b
    # c.device is device(type='cuda', index=1)

    z = x + y
    # z.device is device(type='cuda', index=0)

    # even within a context, you can specify the device
    # (or give a GPU index to the .cuda call)
    d = torch.randn(2, device=cuda2)
    e = torch.randn(2).to(cuda2)
    f = torch.randn(2).cuda(cuda2)
    # d.device, e.device, and f.device are all device(type='cuda', index=2)
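A common idiom that follows from all of this is to pick the device once and pass it explicitly to every tensor and module; a minimal sketch, falling back to the CPU when no GPU is available:

import torch

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

x = torch.zeros(4, device=device)
model = torch.nn.Linear(4, 1).to(device)
y = model(x)
print(y.device)    # cuda:0 if a GPU is available, otherwise cpu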
answered Sep 17 '25 by MBT