
How to load a GPU-trained model onto the CPU?

I am using PyTorch. I have a model that was already trained on multiple GPUs, and I want to use it on a CPU-only machine. How do I do this?

I tried this with Anaconda 3 and a CPU-only PyTorch install; I don't have a GPU.

model = models.get_pose_net(config, is_train=False)
gpus = [int(i) for i in config.GPUS.split(',')]
model = torch.nn.DataParallel(model, device_ids=gpus).cuda()  # this line fails on a CPU-only machine

print('Created model...')
print(model)
checkpoint = torch.load(config.MODEL.RESUME)
model.load_state_dict(checkpoint)
model.eval()
print('Loaded pretrained weights...')

The error I got is:

    AssertionError                            Traceback (most recent call last)
<ipython-input-15-bbfcd201d332> in <module>()
      2 model = models.get_pose_net(config, is_train=False)
      3 gpus = [int(i) for i in config.GPUS.split(',')]
----> 4 model = torch.nn.DataParallel(model, device_ids=gpus).cuda()
      5 print('Created model...')
      6 print(model)

C:\Users\psl\Anaconda3\lib\site-packages\torch\nn\modules\module.py in cuda(self, device)
    258             Module: self
    259         """
--> 260         return self._apply(lambda t: t.cuda(device))
    261 
    262     def cpu(self):

C:\Users\psl\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _apply(self, fn)
    185     def _apply(self, fn):
    186         for module in self.children():
--> 187             module._apply(fn)
    188 
    189         for param in self._parameters.values():

C:\Users\psl\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _apply(self, fn)
    185     def _apply(self, fn):
    186         for module in self.children():
--> 187             module._apply(fn)
    188 
    189         for param in self._parameters.values():

C:\Users\psl\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _apply(self, fn)
    191                 # Tensors stored in modules are graph leaves, and we don't
    192                 # want to create copy nodes, so we have to unpack the data.
--> 193                 param.data = fn(param.data)
    194                 if param._grad is not None:
    195                     param._grad.data = fn(param._grad.data)

C:\Users\psl\Anaconda3\lib\site-packages\torch\nn\modules\module.py in <lambda>(t)
    258             Module: self
    259         """
--> 260         return self._apply(lambda t: t.cuda(device))
    261 
    262     def cpu(self):

C:\Users\psl\Anaconda3\lib\site-packages\torch\cuda\__init__.py in _lazy_init()
    159         raise RuntimeError(
    160             "Cannot re-initialize CUDA in forked subprocess. " + msg)
--> 161     _check_driver()
    162     torch._C._cuda_init()
    163     _cudart = _load_cudart()

C:\Users\psl\Anaconda3\lib\site-packages\torch\cuda\__init__.py in _check_driver()
     80 Found no NVIDIA driver on your system. Please check that you
     81 have an NVIDIA GPU and installed a driver from
---> 82 http://www.nvidia.com/Download/index.aspx""")
     83         else:
     84             # TODO: directly link to the alternative bin that needs install

AssertionError: 
Found no NVIDIA driver on your system. Please check that you
have an NVIDIA GPU and installed a driver from
http://www.nvidia.com/Download/index.aspx
asked Apr 04 '19 by Vijay Prabakaran

People also ask

How do I load a GPU model?

When loading a model on a GPU that was trained and saved on CPU, set the map_location argument of the torch.load() function to cuda:device_id. This loads the model onto the given GPU device.
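For example, here is a minimal sketch of loading a checkpoint that was saved on CPU onto GPU 0 (the TinyNet class and the tiny_net.pth path are placeholders for illustration, not from the question):

import torch
import torch.nn as nn

# Small placeholder network so the example is self-contained.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

device = torch.device('cuda:0')
model = TinyNet()
# map_location remaps the saved tensors onto GPU 0 while loading.
state_dict = torch.load('tiny_net.pth', map_location='cuda:0')
model.load_state_dict(state_dict)
model.to(device)  # make sure the module's parameters live on the same GPU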

How do you load Pretrained weights in PyTorch?

To load model weights, first create an instance of the same model, then load the parameters with the load_state_dict() method. Be sure to call model.eval() before inference to set the dropout and batch normalization layers to evaluation mode.
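A short sketch of that pattern, assuming the weights were saved with torch.save(model.state_dict(), ...) and using a torchvision ResNet-18 as a stand-in architecture (the weights path is a placeholder):

import torch
from torchvision import models

model = models.resnet18()                        # instantiate the same architecture first
state_dict = torch.load('resnet18_weights.pth')  # placeholder path to the saved state_dict
model.load_state_dict(state_dict)
model.eval()                                     # put dropout / batch-norm layers in evaluation mode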

How do I load a saved model in PyTorch?

A common PyTorch convention is to save these checkpoints using the .tar file extension. To load the models, first initialize the models and optimizers, then load the dictionary locally using torch.load(). From here, you can easily access the saved items by simply querying the dictionary as you would expect.
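A sketch of loading such a checkpoint, assuming it was saved as a dictionary with 'model_state_dict', 'optimizer_state_dict' and 'epoch' keys (both the keys and the checkpoint.tar path depend on how the checkpoint was written and are placeholders here):

import torch
from torchvision import models

model = models.resnet18()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

checkpoint = torch.load('checkpoint.tar')  # placeholder path
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
epoch = checkpoint['epoch']

model.eval()  # or model.train() if you are resuming training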


1 Answer

To force the saved model to load onto the CPU, use the following command:

torch.load('/path/to/saved/model', map_location='cpu')

In your case, change it to:

torch.load(config.MODEL.RESUME, map_location='cpu')
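Applied to the code in the question, a sketch for a CPU-only machine would skip DataParallel and .cuda() entirely (note that if the checkpoint was saved from a DataParallel-wrapped model, its keys may carry a 'module.' prefix that has to be stripped first):

model = models.get_pose_net(config, is_train=False)  # no DataParallel / .cuda() on a CPU-only box
checkpoint = torch.load(config.MODEL.RESUME, map_location='cpu')
# If the state_dict was saved from a DataParallel model, drop the 'module.' prefix:
# checkpoint = {k.replace('module.', '', 1): v for k, v in checkpoint.items()}
model.load_state_dict(checkpoint)
model.eval()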
answered Oct 18 '22 by papabiceps