
Is there any way (is it even possible) to get the overall utilization of a GPU over a period of time?

I am trying to get information about the overall utilization of a GPU (mine is an NVIDIA Tesla K20, running on Linux) during a period of time. By "overall" I mean something like: how many streaming multiprocessors are scheduled to run, and how many GPU cores are scheduled to run (I suppose if a core is running, it runs at its full speed/frequency?). It would also be nice if I could get the overall utilization measured in FLOPS.

Of course, before asking the question here I searched and investigated several existing tools/libraries, including NVML (and nvidia-smi built on top of it), CUPTI (and nvprof), PAPI, TAU, and Vampir. However, it seems (though I am not sure yet) that none of them can provide the information I need. E.g., NVML can report "GPU Utilization" as a percentage, but according to its documentation this utilization is the "Percent of time over the past second during which one or more kernels was executing on the GPU", which is apparently not accurate enough. nvprof can report flops for individual kernels (with very high overhead), but I still don't know how well the GPU is utilized overall.
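
To make the NVML limitation concrete, here is a minimal sketch of the kind of sampling it allows (the device index, sample count, and one-second interval are arbitrary placeholders; it needs to be linked against libnvidia-ml):

    /* Minimal sketch: average NVML's coarse "GPU utilization" over a window.
     * Assumes the NVML header and libnvidia-ml are installed; device 0 and
     * the 1-second sampling interval are arbitrary choices.
     * Build with e.g.: gcc nvml_util.c -o nvml_util -lnvidia-ml */
    #include <stdio.h>
    #include <unistd.h>
    #include <nvml.h>

    int main(void)
    {
        if (nvmlInit() != NVML_SUCCESS) {
            fprintf(stderr, "nvmlInit failed\n");
            return 1;
        }

        nvmlDevice_t dev;
        nvmlDeviceGetHandleByIndex(0, &dev);      /* device 0: adjust as needed */

        const int samples = 60;                   /* measurement window: 60 x 1 s */
        unsigned long long sum = 0;

        for (int i = 0; i < samples; ++i) {
            nvmlUtilization_t util;
            if (nvmlDeviceGetUtilizationRates(dev, &util) == NVML_SUCCESS)
                sum += util.gpu;                  /* % of the last sample period in
                                                     which any kernel was resident */
            sleep(1);
        }

        printf("average GPU utilization over %d samples: %llu%%\n",
               samples, sum / samples);
        nvmlShutdown();
        return 0;
    }

This does give a number over a period of time, but it inherits NVML's coarse definition (any resident kernel counts as "utilized"), which is exactly why it is not accurate enough for my purpose.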

PAPI seems to be able to get instruction counts, but it cannot distinguish floating-point operations from other instructions. I haven't tried the other two tools (TAU and Vampir) yet, but I doubt they can meet my needs.

So I am wondering: is it even possible to get this kind of overall utilization information for a GPU? If not, what is the best alternative for estimating it? The purpose of all this is to find a better schedule for multiple jobs running on the GPU.

I am not sure if I've described my question clearly enough, so please let me know if there is anything I can add for a better description.

Thank you very much!

asked Nov 06 '14 by rsm


1 Answer

The NVIDIA Nsight plugin for Visual Studio has very nice graphical features that give you the statistics you want. But I have the feeling that you are on a Linux machine, so Nsight won't work.

I suggest using the NVIDIA Visual Profiler (nvvp).

The metrics reference is fairly complete and can be found here. This is how I would gather the data you are interested in (an example command line follows the list):

  • Active SMX units - look at sm_efficiency. It should be close to 100%. If it's lower, then some of the SMX units are not active.

  • Active cores / SMX - This depends. Each SMX on the K20 has a quad-warp scheduler with dual instruction issue, 192 SP cores, and 64 DP units; a warp occupies 32 cores. You need to look at the ipc metric (instructions per cycle). If your program is DP and the IPC is 2, then you have 100% utilization (for the entire workload execution): two warps issued instructions, so all 64 DP units were active during all cycles. If your program is SP, your IPC should theoretically be 6, although in practice this is very hard to reach. An IPC of 6 means that 3 of the schedulers dual-issued instructions, giving work to 3 x 2 x 32 = 192 SP cores.

  • FLOPS - Well, if your program uses floating-point operations, then I would look at flop_count_sp and divide it by the elapsed time in seconds.
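
Putting these together, a single profiling run can collect all three (./myapp is just a placeholder for your binary, and exact metric names can vary between CUDA versions, so it's worth listing what your toolkit supports first):

    nvprof --query-metrics                  # list the metrics your toolkit supports
    nvprof --metrics sm_efficiency,ipc,flop_count_sp,flop_count_dp ./myapp

Achieved FLOPS is then roughly flop_count_sp (or flop_count_dp) divided by the elapsed time in seconds.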

Regarding frequency, I wouldn't worry, but it doesn't hurt to check with nvidia-smi. If your card has enough cooling, it will stay at its peak frequency while running.
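
For example, something like the following (run alongside your workload; the 1-second refresh is arbitrary) shows the current and maximum SM clocks plus the coarse utilization figure:

    nvidia-smi --query-gpu=clocks.sm,clocks.max.sm,utilization.gpu --format=csv -l 1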

Check the metrics reference, as it will provide you with much more useful information.

I think nvprof also supports profiling multiple processes; check here. You can also filter by process ID. So you can collect these metrics "multi-context" or "single-context"; in the metrics reference table there is a column that states whether each metric can be collected in both cases.
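
For instance, to attach to every CUDA process on the machine and write one profile per process (%p expands to the process ID in nvprof output file names), something along these lines should work; run it in one terminal and launch your jobs normally in another:

    nvprof --profile-all-processes -o profile.%p.nvprof

Each resulting file can then be loaded into the Visual Profiler.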

Note: the metrics are computed using the hardware performance counters and driver-level analysis. If NVIDIA's own tools cannot provide more than this, then it's unlikely that other tools will be able to offer more. But I think that properly combining these metrics can tell you everything you want to know about your application's run.

answered Oct 19 '22 by VAndrei