 

GPU usage shows zero when using CUDA with PyTorch on Windows

I have a PyTorch script:

import torch

torch.cuda.is_available() 
# True

device=torch.device('cuda:0') 
# I moved my tensors to device

But Windows Task Manager shows zero GPU (NVIDIA GTX 1050 Ti) usage while the PyTorch script is running. The speed of my script is fine, and if I change torch.device to the CPU instead of the GPU, it becomes slower, so CUDA (the GPU) is clearly being used. Why doesn't Windows Task Manager show any GPU usage?

Sample of my code:

import torch
from PIL import Image
from torchvision import transforms

device = torch.device("cuda:0")
model = torch.load('mymodel.pth', map_location=device)
model.eval()  # inference mode

image = Image.open('picture.png').convert('RGB')
transform = transforms.Compose([
            transforms.Resize(224),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ])
input_tensor = transform(image)               # avoid shadowing the built-in `input`
input_tensor = torch.unsqueeze(input_tensor, 0)  # add batch dimension
input_tensor = input_tensor.to(device)
output = model(input_tensor)
asked Mar 12 '26 21:03 by texnicii


2 Answers

Windows Task Manager's default overall GPU utilization graph does not include CUDA usage. In the Task Manager GPU view, switch one of the engine graphs from "3D" to "Cuda" to see it.

For details see: https://medium.com/@michaelceber/gpu-monitoring-on-windows-10-for-machine-learning-cuda-41088de86d65
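If you'd rather confirm from inside the script than from Task Manager, one quick sanity check (assuming a CUDA build of PyTorch) is `torch.cuda.memory_allocated()`, which reports nonzero bytes once tensors live on the GPU:

```python
import torch

if torch.cuda.is_available():
    device = torch.device('cuda:0')
    x = torch.randn((1024, 1024), device=device)  # allocate a tensor on the GPU
    # Nonzero allocated memory confirms the GPU is actually in use,
    # regardless of what Task Manager's default graphs show.
    print(torch.cuda.memory_allocated(device), "bytes allocated on", device)
else:
    print("No CUDA device available")
```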

answered Mar 15 '26 13:03 by mixophyes


Just calling torch.device('cuda:0') doesn't actually use the GPU. It's just an identifier for a device.

Instead, following the documentation, you should move your tensors and models to the GPU.

# Create a tensor directly on the GPU
torch.randn((2, 3), device=torch.device('cuda:0'))
# Or move an existing tensor to the GPU
tensor = torch.randn((2, 3))
cuda0 = torch.device('cuda:0')
tensor = tensor.to(cuda0)  # .to() returns a new tensor; reassign it
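Note that `Tensor.to()` is not in-place: it returns a new tensor, and the original stays where it was. A minimal sketch (falling back to the CPU when no GPU is available, so it runs anywhere):

```python
import torch

# Pick the GPU if one is present, otherwise stay on the CPU.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

tensor = torch.randn((2, 3))  # created on the CPU
moved = tensor.to(device)     # .to() returns a NEW tensor; `tensor` is unchanged

print(tensor.device)  # always cpu
print(moved.device)   # cuda:0 on a GPU machine, cpu otherwise
```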
answered Mar 15 '26 11:03 by Arya McCarthy


