How to get the device type of a pytorch module conveniently?

I have to stack some layers of my own on top of different kinds of PyTorch models that live on different devices.

E.g. A is a CUDA model and B is a CPU model (but I don't know which until I check the device type). Then the new models are C and D respectively, where

class NewModule(torch.nn.Module):
    def __init__(self, base):
        super(NewModule, self).__init__()
        self.base = base
        self.extra = my_layer() # e.g. torch.nn.Linear()

    def forward(self, x):
        y = self.base(x)
        z = self.extra(y)
        return z

...

C = NewModule(A) # cuda
D = NewModule(B) # cpu

However, I must move base and extra to the same device, i.e. C's base and extra should be CUDA models and D's should be CPU models. So I tried this __init__:

def __init__(self, base):
    super(NewModule, self).__init__()
    self.base = base
    self.extra = my_layer().to(base.device)

Unfortunately, torch.nn.Module has no device attribute (this raises an AttributeError).

What should I do to get the device type of base? Or is there any other method to make base and extra end up on the same device automatically, even when the structure of base is unknown?

Asked Nov 19 '19 by Kani

People also ask

How do I get the device of a model in PyTorch?

You access the device by simply typing model.device, as for parameters.

What is device in PyTorch?

device enables you to specify the device type responsible for loading a tensor into memory. The function expects a string argument specifying the device type. You can also pass an ordinal, i.e. the device index, or leave it unspecified for PyTorch to use the currently available device.
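For illustration, a minimal sketch of the ways a torch.device can be constructed (the variable names here are mine; the calls are standard PyTorch):

```python
import torch

# A device can be given as a plain string, a string with an
# ordinal appended, or a device type plus an explicit index.
cpu = torch.device("cpu")
gpu0 = torch.device("cuda:0")      # string with ordinal
gpu_alt = torch.device("cuda", 0)  # type + index; equal to gpu0
```

Note that merely constructing a CUDA device object does not require a GPU; only allocating tensors on it does.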

What does cuda () do PyTorch?

cuda is used to set up and run CUDA operations. It keeps track of the currently selected GPU, and all CUDA tensors you allocate will by default be created on that device. The selected device can be changed with a torch.cuda.device context manager.

How do I know if PyTorch graphics card is installed?

Check GPU availability: the easiest way to check if you have access to GPUs is to call torch.cuda.is_available(). If it returns True, the system has the NVIDIA driver correctly installed.
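A minimal sketch of that check, with a CPU fallback:

```python
import torch

# Use the GPU when the driver and a device are available, else the CPU.
use_gpu = torch.cuda.is_available()
device = torch.device("cuda" if use_gpu else "cpu")
```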


1 Answer

This question has been asked many times (1, 2). Quoting the reply from a PyTorch developer:

That’s not possible. Modules can hold parameters of different types on different devices, and so it’s not always possible to unambiguously determine the device.

The recommended workflow (as described on the PyTorch blog) is to create the device object once and use it everywhere. Copy-pasting the example from the blog here:

# at beginning of the script
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

...

# then whenever you get a new Tensor or Module
# this won't copy if they are already on the desired device
input = data.to(device)
model = MyModule(...).to(device)

Do note that nothing stops you from adding a .device property to your models.
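For example, such a property could be added to the NewModule class from the question. This is a sketch that assumes all parameters sit on a single device, with torch.nn.Linear standing in for the unspecified my_layer():

```python
import torch

class NewModule(torch.nn.Module):
    def __init__(self, base):
        super().__init__()
        self.base = base
        self.extra = torch.nn.Linear(4, 4)  # stand-in for my_layer()

    @property
    def device(self):
        # Assumes every parameter lives on the same device.
        return next(self.parameters()).device

    def forward(self, x):
        return self.extra(self.base(x))

C = NewModule(torch.nn.Linear(4, 4))
print(C.device)  # prints "cpu" on a CPU-only machine
```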

As mentioned by Kani (in the comments), if all the parameters in the model are on the same device, one can use next(model.parameters()).device.
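A quick sketch of that idiom applied to the question's situation (the model here is a hypothetical stand-in for base):

```python
import torch

base = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU())
device = next(base.parameters()).device

# Move a companion layer to wherever base's parameters live:
extra = torch.nn.Linear(4, 2).to(device)
```

Beware that next(model.parameters()) raises StopIteration on a module with no parameters at all.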

Answered Sep 19 '22 by Shagun Sodhani