 

Measuring peak nvidia GPU memory usage on linux

Tags: linux, bash, cuda, gpu

To measure the GPU memory currently used by my programs, I can use the following command (Ubuntu Linux, NVIDIA GPU):

while true; do nvidia-smi --query-gpu=memory.used --format=csv; sleep .5; done|grep -v memory

It regularly prints values like this:

70 MiB
74 MiB
75 MiB
76 MiB
77 MiB
77 MiB
70 MiB

Is it possible to modify the command to always display the maximum value instead of the latest?

(in a bash-only way, if possible)

Asked Aug 31 '25 by bct

2 Answers

Not sure, but you can give something like this a try:

a=0
while true; do
  # current usage: drop the "memory.used [MiB]" header line, keep just the number
  b=$(nvidia-smi --query-gpu=memory.used --format=csv | grep -v memory | awk '{print $1}')
  # print only when a new maximum is observed
  [ "$b" -gt "$a" ] && a=$b && echo "$a"
  sleep .5
done
Answered Sep 02 '25 by Sriharsha Kalluru

Run the following command, replacing <gpu-id> with the GPU in question; you can get the GPU id by running nvidia-smi. The -l 1 flag makes nvidia-smi repeat the query every second:

nvidia-smi --query-gpu=memory.used --format=csv -i <gpu-id> -l 1
Answered Sep 02 '25 by AnikethSuresh
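
Note that -l only re-prints the latest sample, so to get the peak something downstream still has to remember the largest value seen. A small sketch on top of this answer's command, again assuming the noheader,nounits format options (the 0 after -i is a placeholder GPU id):

nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits -i 0 -l 1 \
  | awk '$1 > max { max = $1; print "peak: " max " MiB" }'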