I am trying to address the issue in the title:
Loaded runtime CuDNN library: 7.1.2 but source was compiled with: 7.6.0. CuDNN library major and minor version needs to match or have higher minor version in case of CuDNN 7.0 or later version
I have read several other posts (example: Loaded runtime CuDNN library: 5005 (compatibility version 5000) but source was compiled with 5103 (compatibility version 5100))
which basically tell me that my machine has cuDNN 7.1.2 but I need 7.6.x, and that the answer is to download and install 7.6.*.
The only issue is that I thought I had already done that by following the instructions on the NVIDIA cuDNN archive (https://developer.nvidia.com/rdp/cudnn-archive), and if I go to /usr/local/cuda/include and read cudnn.h, it shows:
#if !defined(CUDNN_H_)
#define CUDNN_H_
#define CUDNN_MAJOR 7
#define CUDNN_MINOR 6
#define CUDNN_PATCHLEVEL 4
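One way to cross-check the header against the shared libraries on disk (a rough sketch, assuming the default install locations; the header only shows what the dev files advertise, while the runtime error is about the .so that actually gets loaded):
$ grep -E 'define CUDNN_(MAJOR|MINOR|PATCHLEVEL)' /usr/local/cuda/include/cudnn.h
$ ls -l /usr/local/cuda/lib64/libcudnn.so*        # which libcudnn.so.7.x the symlinks point to
$ ls -l /usr/lib/x86_64-linux-gnu/libcudnn.so*    # the .deb package install location, if used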
Currently I have CUDA 10.0, 10.1, and 10.2 installed, with my .bashrc set to 10.0 (although nvcc --version reports CUDA 9.1, another issue I can't seem to fix).
Any suggestions? I have been trying to tackle this for days with no luck.
UPDATE:
Here are the paths I have set:
export PATH=$PATH:/usr/local/cuda-10.0/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-10.0/lib64
export CUDA_HOME=/usr/local/cuda
Before this is closed, could you please help by either suggesting the proper paths to set or pointing me to where I can find the older cuDNN?
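In case it helps with the diagnosis, these are the checks I can run to see which toolkit actually wins (a sketch; my guess is that the CUDA 9.1 nvcc comes from a distro package that appears earlier on PATH, but I am not sure):
$ which nvcc                      # /usr/bin/nvcc (distro package) vs /usr/local/cuda-10.0/bin/nvcc
$ readlink -f /usr/local/cuda     # where the unversioned symlink (and thus CUDA_HOME) points
$ echo $LD_LIBRARY_PATH           # which lib64 directories the loader is told to search first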
I hit a very similar error:
Loaded runtime CuDNN library: 7.1.4 but source was compiled with: 7.6.5. CuDNN library major and minor version needs to match or have higher minor version in case of CuDNN 7.0 or later version. If using a binary install, upgrade your CuDNN library. If building from sources, make sure the library loaded at runtime is compatible with the version specified during compile configuration.
and tracked it down to accidentally having an older cuDNN in my ldconfig:
$ sudo ldconfig -p | grep libcudnn
libcudnn.so.7 (libc6,x86-64) => /usr/local/cuda-9.0/lib64/libcudnn.so.7
libcudnn.so.7 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libcudnn.so.7
libcudnn.so (libc6,x86-64) => /usr/local/cuda-9.0/lib64/libcudnn.so
libcudnn.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libcudnn.so
The libcudnn.so.7 file in the cuda-9.0 directory was pointing to the older version:
ls -alh /usr/local/cuda-9.0/lib64/libcudnn.so.7
lrwxrwxrwx 1 root root 17 Dec 16 2018 /usr/local/cuda-9.0/lib64/libcudnn.so.7 -> libcudnn.so.7.1.4
But I had compiled TensorFlow against the newer version:
ls -alh /usr/lib/x86_64-linux-gnu/libcudnn.so.7
lrwxrwxrwx 1 root root 17 Oct 27 2019 /usr/lib/x86_64-linux-gnu/libcudnn.so.7 -> libcudnn.so.7.6.5
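One way to confirm which copy the loader actually resolves at runtime (a sketch, assuming a Python TensorFlow install; LD_DEBUG is a standard glibc loader feature) is:
$ LD_DEBUG=libs python -c "import tensorflow" 2>&1 | grep -i libcudnn   # shows the search and the path that wins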
Since ldconfig uses /etc/ld.so.conf to determine where to look for libraries (I guess in conjunction with LD_LIBRARY_PATH), I checked it, and it showed:
include /etc/ld.so.conf.d/*.conf
When I listed the files in that directory, I spotted the problem file and removed it:
$ cat /etc/ld.so.conf.d/cuda9.conf
/usr/local/cuda-9.0/lib64
$ sudo rm /etc/ld.so.conf.d/cuda9.conf
After that I re-ran ldconfig to reload the config, and then everything worked as expected and the error disappeared.
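For completeness, a quick way to verify the fix took effect (just the same cache check as above, re-run after removing the conf file):
$ sudo ldconfig                       # rebuild the loader cache
$ sudo ldconfig -p | grep libcudnn    # the cuda-9.0 entries should be gone now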