I am using PyTorch to train a DQN model. On Ubuntu, when I run htop, I see that all CPU cores are at full load, which worries me a bit. Here is my code.
Is there a way to make it use fewer resources? Do I have to configure this through PyTorch itself?
Be aware that there are no GPUs on my machine, just CPUs.
Yes, there is. You can use torch.set_num_threads(...) to specify the number of threads. Depending on the PyTorch version you use, this function may not work correctly; see this issue for the reason why. There you'll also see that, if needed, you can limit OpenMP or MKL thread usage via the environment variables OMP_NUM_THREADS=N and MKL_NUM_THREADS=N respectively, where N is the number of threads.
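As a minimal sketch of both approaches (the thread count of 4 is just an example, pick whatever fits your machine; note the environment variables should be set before torch is imported to take effect reliably):

```python
import os

# Limit OpenMP / MKL thread pools. Set these BEFORE importing torch,
# since the thread pools are initialized at import time.
os.environ["OMP_NUM_THREADS"] = "4"
os.environ["MKL_NUM_THREADS"] = "4"

import torch

# Cap the number of threads PyTorch uses for intra-op parallelism.
torch.set_num_threads(4)
print(torch.get_num_threads())  # → 4
```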
Keep in mind that these models are expected to run on GPUs with thousands of cores, so I would limit CPU usage only when strictly necessary.