My CUDA program crashed during execution, before memory was flushed. As a result, device memory remained occupied.
I'm running on a GTX 580, for which nvidia-smi --gpu-reset is not supported.
Placing cudaDeviceReset() at the beginning of the program only affects the context created by the calling process and doesn't flush the memory that was allocated before it.
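For concreteness, here is a minimal sketch of that approach; the kernel and sizes are placeholders rather than my real code, and the comments mark where the reset falls short:

#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel -- stands in for whatever the real program launches.
__global__ void dummyKernel(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    // Resetting at startup only destroys the primary context of *this* process,
    // so allocations left behind by a previously crashed process are untouched.
    cudaError_t err = cudaDeviceReset();
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaDeviceReset failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    const int n = 1 << 20;
    float *d_data = nullptr;
    // This is where "out of memory" shows up if the device is still full.
    err = cudaMalloc(&d_data, n * sizeof(float));
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    dummyKernel<<<(n + 255) / 256, 256>>>(d_data, n);
    cudaDeviceSynchronize();
    cudaFree(d_data);
    return 0;
}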
I'm accessing a Fedora server with that GPU remotely, so physical reset is quite complicated.
So, the question is - Is there any way to flush the device memory in this situation?
In my case, "cuda runtime error (2): out of memory" was occurring because the GPU had run out of free memory. Because PyTorch typically keeps large amounts of data on the device, even a small oversight or a leftover process can leave the GPU without enough free memory and crash your program.
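Before killing anything, you can confirm that the device really has no free memory left by querying it with the CUDA runtime call cudaMemGetInfo. This is a small stand-alone sketch (not part of the original answer), compiled with nvcc:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t free_bytes = 0, total_bytes = 0;
    cudaError_t err = cudaMemGetInfo(&free_bytes, &total_bytes);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    // If "free" stays near zero even when nothing of yours is running,
    // some other (possibly leftover) process is still holding the memory.
    std::printf("free: %.1f MiB / total: %.1f MiB\n",
                free_bytes / (1024.0 * 1024.0),
                total_bytes / (1024.0 * 1024.0));
    return 0;
}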
Check what is using your GPU memory with
sudo fuser -v /dev/nvidia*
Your output will look something like this:
                     USER        PID ACCESS COMMAND
/dev/nvidia0:        root       1256 F...m  Xorg
                     username   2057 F...m  compiz
                     username   2759 F...m  chrome
                     username   2777 F...m  chrome
                     username  20450 F...m  python
                     username  20699 F...m  python
Then kill the processes you no longer need, either in htop or with
sudo kill -9 PID
In the example above, PyCharm was eating a lot of memory, so I killed 20450 and 20699.
Alternatively, first type
nvidia-smi
then select the PID that you want to kill and run
sudo kill -9 PID
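If you prefer to find the offending PIDs programmatically instead of reading the fuser or nvidia-smi output by hand, NVML can list the compute processes holding memory on a device. This is only a sketch, assuming the NVML header and library (nvml.h, linked with -lnvidia-ml) are installed; it is not taken from the answers above:

#include <cstdio>
#include <nvml.h>

int main() {
    if (nvmlInit() != NVML_SUCCESS) {
        std::fprintf(stderr, "failed to initialise NVML\n");
        return 1;
    }

    nvmlDevice_t device;
    if (nvmlDeviceGetHandleByIndex(0, &device) != NVML_SUCCESS) {
        std::fprintf(stderr, "no handle for GPU 0\n");
        nvmlShutdown();
        return 1;
    }

    // Ask for up to 64 compute processes currently holding memory on GPU 0.
    unsigned int count = 64;
    nvmlProcessInfo_t procs[64];
    nvmlReturn_t rc = nvmlDeviceGetComputeRunningProcesses(device, &count, procs);
    if (rc == NVML_SUCCESS) {
        for (unsigned int i = 0; i < count; ++i) {
            char name[256] = "unknown";
            nvmlSystemGetProcessName(procs[i].pid, name, sizeof(name));
            // These PIDs are what you would pass to `sudo kill -9`.
            std::printf("pid %u (%s) uses %llu bytes\n",
                        procs[i].pid, name,
                        (unsigned long long)procs[i].usedGpuMemory);
        }
    } else {
        std::fprintf(stderr, "query failed: %s\n", nvmlErrorString(rc));
    }

    nvmlShutdown();
    return 0;
}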