 

How to clear GPU memory after using model?

Tags:

pytorch

I'm trying to free up GPU memory after finishing using the model.

  • I checked nvidia-smi before creating and training the model: 402MiB / 7973MiB
  • After creating and training the model, I checked the GPU memory again with nvidia-smi: 7801MiB / 7973MiB
  • Now I tried to free up GPU memory with:
import gc
import torch

del model
torch.cuda.empty_cache()
gc.collect()

and checked again the GPU memory: 2361MiB / 7973MiB

  • As you can see, not all the GPU memory was released (I expected to get back to ~400MiB / 7973MiB).
  • I can only release the GPU memory from the terminal (sudo fuser -v /dev/nvidia* and kill pid)

Is there a way to free up the GPU memory after I'm done using the model?
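For reference, ordering matters in the cleanup above: empty_cache() can only return blocks whose tensors are no longer referenced, so all Python references should be dropped and garbage-collected first. A minimal sketch of that sequence (the helper name is my own, not from the question; it is a no-op on CPU-only machines):

```python
import gc
import torch

def reserved_after_cleanup():
    """Run the recommended cleanup order and report bytes PyTorch still reserves."""
    gc.collect()                      # free unreferenced (incl. cyclic) tensors first
    if torch.cuda.is_available():
        torch.cuda.empty_cache()      # then hand cached, unused blocks back to the driver
        return torch.cuda.memory_reserved()
    return 0                          # CPU-only: nothing reserved on the GPU
```

Usage would be `del model; del optimizer; print(reserved_after_cleanup())` — any object still holding a tensor reference (optimizer state, dataloader workers, stored outputs) keeps that memory reserved.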

user3668129 asked Apr 09 '26 13:04


1 Answer

This happens because PyTorch caches GPU memory in its own allocator to speed up future allocations, so nvidia-smi keeps reporting the reserved memory even after your tensors are freed. To learn more about it, see pytorch memory management. To solve this issue, you can use the following code:

from numba import cuda

cuda.select_device(your_gpu_id)  # the index of the GPU to reset, e.g. 0
cuda.close()                     # destroys the CUDA context, releasing all its memory

However, this comes with a catch: cuda.close() destroys the CUDA context entirely, so you can't allocate on that GPU again (e.g. resume training) without restarting the process.
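A common way around that catch (my suggestion, not from the answer) is to isolate each training run in a child process: when the child exits, the driver reclaims all of its GPU memory, and the parent process can launch the next run with a clean slate. A rough sketch using only the standard library:

```python
import subprocess
import sys

def run_isolated(code: str) -> str:
    """Run training code in a child Python process and return its stdout.

    Every CUDA allocation the child makes is reclaimed by the driver
    when the child exits, so the parent never accumulates GPU memory.
    """
    result = subprocess.run(
        [sys.executable, "-c", code],  # in practice, point this at your training script
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```

For anything beyond a quick experiment you would pass results back via files or a multiprocessing queue rather than stdout, but the isolation principle is the same.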

Labiba Kanij answered Apr 23 '26 10:04


