My machine has a GeForce 940MX GDDR5 GPU.
I have installed all the requirements to run GPU-accelerated dlib:
CUDA 9.0 toolkit with all 3 patch updates from https://developer.nvidia.com/cuda-90-download-archive?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exelocal
cuDNN 7.1.4
Then, after cloning the davisking/dlib repository on GitHub, I ran all the commands below to compile dlib with GPU support:
$ git clone https://github.com/davisking/dlib.git
$ cd dlib
$ mkdir build
$ cd build
$ cmake .. -DDLIB_USE_CUDA=1 -DUSE_AVX_INSTRUCTIONS=1
$ cmake --build .
$ cd ..
$ python setup.py install --yes USE_AVX_INSTRUCTIONS --yes DLIB_USE_CUDA
Now how can I check/confirm whether dlib (or libraries that depend on dlib, like Adam Geitgey's face_recognition) is using the GPU from the Python shell or Anaconda (Jupyter Notebook)?
Then open a command prompt and do the same as step 2: type the command "python -m pip install", then a space, then paste the copied link. dlib should then install successfully. After that, type python and press Enter, then type import dlib to check that dlib is installed correctly.
Dlib is a general purpose cross-platform software library written in the programming language C++. Its design is heavily influenced by ideas from design by contract and component-based software engineering.
Running dlib via Python should use my GPU, not the CPU. (I haven't tried the dlib C++ examples yet; they are still building. I assume the Python bindings are a wrapper that invokes the C++ code, so the Python examples should behave the same way.)
In particular, many users report that "dlib isn't using CUDA even though I definitely compiled it with CUDA", and in every case either they are not using a part of dlib that uses CUDA, or they have installed multiple copies of dlib on their computer, some with CUDA disabled, and they are using a non-CUDA build.
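For example (a minimal sketch, not part of the original answer; it assumes you have downloaded dlib's mmod_human_face_detector.dat model and have some test image on disk): the default HOG-based frontal face detector runs entirely on the CPU, while the CNN face detection model is one of the parts of dlib that actually uses CUDA, so only the latter will ever show up as GPU work.
import dlib

# HOG-based detector: CPU only, never uses CUDA.
hog_detector = dlib.get_frontal_face_detector()

# CNN-based detector: runs on the GPU when dlib was built with DLIB_USE_CUDA.
# Needs the mmod_human_face_detector.dat model file distributed by dlib.
cnn_detector = dlib.cnn_face_detection_model_v1("mmod_human_face_detector.dat")

img = dlib.load_rgb_image("test.jpg")  # any test image you have on disk

print(len(hog_detector(img)))     # CPU work only
print(len(cnn_detector(img, 1)))  # this call is the one that should hit the GPU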
The only way to use features in a new version of dlib is to get the new version of dlib. Often people think they have the new version of dlib installed when really they have some old version installed.
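A quick way to rule out the old/duplicate-dlib problem is to check, inside the exact Python environment you run your code from, which copy of dlib actually gets imported (a small sketch):
import dlib

print(dlib.__version__)    # the version that is actually imported
print(dlib.__file__)       # where it was loaded from; exposes duplicate installs
print(dlib.DLIB_USE_CUDA)  # True only if this particular build was compiled with CUDA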
You can start by installing one of the monitoring tools like GPU-Z, CPUID, or AIDA64 to fetch all the hardware information of your computer. That way you can see which GPU is integrated and which is dedicated. The same information is also available from the graphics card's driver.
In addition to the previous answer's check using the command
dlib.DLIB_USE_CUDA
there are some alternative ways to make sure dlib is actually using your GPU.
The easiest is to check whether dlib recognizes your GPU:
import dlib.cuda as cuda
print(cuda.get_num_devices())
If the number of devices is >= 1 then dlib can use your device.
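Putting the two checks together, something like the following can be dropped into a script or a notebook cell (a minimal sketch; the helper name is my own):
import dlib
import dlib.cuda as cuda

def check_dlib_gpu():
    # Report whether this dlib build can use CUDA and how many GPUs it sees.
    print("DLIB_USE_CUDA:", dlib.DLIB_USE_CUDA)
    num_devices = cuda.get_num_devices()
    print("CUDA devices visible to dlib:", num_devices)
    return dlib.DLIB_USE_CUDA and num_devices >= 1

if check_dlib_gpu():
    print("dlib should be able to run its CUDA code paths on this machine.")
else:
    print("dlib will fall back to the CPU.")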
Another useful trick is to run your dlib code and at the same time run
$ nvidia-smi
This should give you the full GPU utilization information, where you can see the total utilization together with the memory usage of each process separately.
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.48 Driver Version: 410.48 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1070 Off | 00000000:01:00.0 On | N/A |
| 0% 52C P2 36W / 151W | 763MiB / 8117MiB | 5% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1042 G /usr/lib/xorg/Xorg 18MiB |
| 0 1073 G /usr/bin/gnome-shell 51MiB |
| 0 1428 G /usr/lib/xorg/Xorg 167MiB |
| 0 1558 G /usr/bin/gnome-shell 102MiB |
| 0 2113 G ...-token=24AA922604256065B682BE6D9A74C3E1 33MiB |
| 0 3878 C python 385MiB |
+-----------------------------------------------------------------------------+
In some cases the Processes box might say something like "processes are not supported". This does not mean your GPU cannot run code; it just means the GPU does not support this kind of process logging.
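If you prefer to watch the numbers from Python rather than in a second terminal, you can poll nvidia-smi while your dlib code runs (a rough sketch; the helper name is my own, and it simply shells out to nvidia-smi, so the NVIDIA driver tools must be on your PATH):
import subprocess
import time

def poll_gpu(seconds=10, interval=1.0):
    # Print GPU utilization and memory use once per interval.
    query = ["nvidia-smi",
             "--query-gpu=utilization.gpu,memory.used,memory.total",
             "--format=csv,noheader"]
    end = time.time() + seconds
    while time.time() < end:
        print(subprocess.check_output(query, text=True).strip())
        time.sleep(interval)

# Run this in one cell/process while the dlib workload runs in another.
poll_gpu()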
If dlib.DLIB_USE_CUDA is true then it's using CUDA; if it's false then it isn't.
As an aside, these steps do nothing and are not needed to use Python:
$ mkdir build
$ cd build
$ cmake .. -DDLIB_USE_CUDA=1 -DUSE_AVX_INSTRUCTIONS=1
$ cmake --build .
Just running setup.py is all you need to do.
The following snippets have been simplified to show how to check whether dlib is using the GPU or not.
First, check whether dlib identifies your GPU or not.
import dlib.cuda as cuda
print(cuda.get_num_devices())
Second, check dlib.DLIB_USE_CUDA.
If it is False, that copy of dlib was built without CUDA support. Note that assigning dlib.DLIB_USE_CUDA = True does not turn GPU support on; the flag only reports how dlib was compiled, so to get GPU support you need to rebuild/reinstall dlib with CUDA enabled as described above.