Suppose I create a tensor, move it to the GPU, and later no longer need it. How do I free the GPU memory that was allocated to it?
import torch
a = torch.randn(3, 4).cuda()  # nvidia-smi shows that some memory has been allocated
# do something with a
# desired end state: a no longer exists and nvidia-smi shows the memory has been freed
I have tried:
del a
del a; torch.cuda.empty_cache()
But neither of them works: nvidia-smi still shows the memory as in use.
Running del a
frees the tensor's memory, but it is returned to PyTorch's caching allocator rather than to the device, which is why the memory is still shown as used in nvidia-smi. If you create a new tensor afterwards, it can reuse that cached memory. Calling torch.cuda.empty_cache() releases the unoccupied cached memory back to the driver, but the CUDA context itself (several hundred MB) stays allocated for the lifetime of the process, so nvidia-smi will never drop back to zero until the process exits.
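A minimal sketch of how this looks in practice, assuming a CUDA-capable build of PyTorch: torch.cuda.memory_allocated() counts bytes held by live tensors, while torch.cuda.memory_reserved() counts what the caching allocator holds from the driver (which is roughly what nvidia-smi reflects).

```python
import torch

if torch.cuda.is_available():
    a = torch.randn(3, 4, device="cuda")
    print(torch.cuda.memory_allocated())  # > 0: tensor is live
    del a                                 # memory goes back to the caching allocator
    print(torch.cuda.memory_allocated())  # 0: no live tensors remain
    print(torch.cuda.memory_reserved())   # still > 0: allocator keeps the block cached
    torch.cuda.empty_cache()              # release unused cached blocks to the driver
    print(torch.cuda.memory_reserved())   # cached blocks released; CUDA context remains
else:
    # On CPU-only builds these counters simply report 0
    print("CUDA not available")
```

Note that even after empty_cache(), nvidia-smi still shows the memory consumed by the CUDA context, which cannot be freed without ending the process.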
https://discuss.pytorch.org/t/how-to-delete-pytorch-objects-correctly-from-memory/947
https://discuss.pytorch.org/t/about-torch-cuda-empty-cache/34232