I am trying to compile Caffe on Ubuntu 14.04 with a GeForce GTX 750 Ti GPU, but the build fails. I installed the cuDNN library in /usr/local/cuda/lib64 and the cudnn.h header in /usr/local/cuda/include, yet there still seems to be a problem. I suspect it comes from enabling the cuDNN switch in Makefile.config:
# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1
That is where the build breaks. What exactly do these errors mean?
./include/caffe/util/cudnn.hpp:65:5: error: expected primary-expression before ‘int’
int n, int c, int h, int w,
^
./include/caffe/util/cudnn.hpp:65:12: error: expected primary-expression before ‘int’
int n, int c, int h, int w,
^
./include/caffe/util/cudnn.hpp:65:19: error: expected primary-expression before ‘int’
int n, int c, int h, int w,
^
./include/caffe/util/cudnn.hpp:65:26: error: expected primary-expression before ‘int’
int n, int c, int h, int w,
^
./include/caffe/util/cudnn.hpp:66:5: error: expected primary-expression before ‘int’
int stride_n, int stride_c, int stride_h, int stride_w) {
^
./include/caffe/util/cudnn.hpp:66:19: error: expected primary-expression before ‘int’
int stride_n, int stride_c, int stride_h, int stride_w) {
^
./include/caffe/util/cudnn.hpp:66:33: error: expected primary-expression before ‘int’
int stride_n, int stride_c, int stride_h, int stride_w) {
^
./include/caffe/util/cudnn.hpp:66:47: error: expected primary-expression before ‘int’
int stride_n, int stride_c, int stride_h, int stride_w) {
^
./include/caffe/util/cudnn.hpp:72:29: error: variable or field ‘setTensor4dDesc’ declared void
inline void setTensor4dDesc(cudnnTensor4dDescriptor_t* desc,
^
./include/caffe/util/cudnn.hpp:72:29: error: ‘cudnnTensor4dDescriptor_t’ was not declared in this scope
./include/caffe/util/cudnn.hpp:72:56: error: ‘desc’ was not declared in this scope
inline void setTensor4dDesc(cudnnTensor4dDescriptor_t* desc,
^
./include/caffe/util/cudnn.hpp:73:5: error: expected primary-expression before ‘int’
int n, int c, int h, int w) {
^
./include/caffe/util/cudnn.hpp:73:12: error: expected primary-expression before ‘int’
int n, int c, int h, int w) {
^
./include/caffe/util/cudnn.hpp:73:19: error: expected primary-expression before ‘int’
int n, int c, int h, int w) {
^
./include/caffe/util/cudnn.hpp:73:26: error: expected primary-expression before ‘int’
int n, int c, int h, int w) {
^
./include/caffe/util/cudnn.hpp:97:5: error: ‘cudnnTensor4dDescriptor_t’ has not been declared
cudnnTensor4dDescriptor_t bottom, cudnnFilterDescriptor_t filter,
^
./include/caffe/util/cudnn.hpp: In function ‘void caffe::cudnn::setConvolutionDesc(cudnnConvolutionStruct**, int, cudnnFilterDescriptor_t, int, int, int, int)’:
./include/caffe/util/cudnn.hpp:100:70: error: there are no arguments to ‘cudnnSetConvolutionDescriptor’ that depend on a template parameter, so a declaration of ‘cudnnSetConvolutionDescriptor’ must be available [-fpermissive]
pad_h, pad_w, stride_h, stride_w, 1, 1, CUDNN_CROSS_CORRELATION));
^
./include/caffe/util/cudnn.hpp:11:28: note: in definition of macro ‘CUDNN_CHECK’
cudnnStatus_t status = condition; \
^
./include/caffe/util/cudnn.hpp:100:70: note: (if you use ‘-fpermissive’, G++ will accept your code, but allowing the use of an undeclared name is deprecated)
pad_h, pad_w, stride_h, stride_w, 1, 1, CUDNN_CROSS_CORRELATION));
^
./include/caffe/util/cudnn.hpp:11:28: note: in definition of macro ‘CUDNN_CHECK’
cudnnStatus_t status = condition; \
^
./include/caffe/util/cudnn.hpp: In function ‘void caffe::cudnn::createPoolingDesc(cudnnPoolingStruct**, caffe::PoolingParameter_PoolMethod, cudnnPoolingMode_t*, int, int, int, int)’:
./include/caffe/util/cudnn.hpp:119:27: error: there are no arguments to ‘cudnnSetPoolingDescriptor’ that depend on a template parameter, so a declaration of ‘cudnnSetPoolingDescriptor’ must be available [-fpermissive]
stride_h, stride_w));
^
./include/caffe/util/cudnn.hpp:11:28: note: in definition of macro ‘CUDNN_CHECK’
cudnnStatus_t status = condition; \
^
make: *** [.build_release/src/caffe/syncedmem.o] Error 1
I tested a CUDA sample and the GPU works fine with CUDA. Here is the result:
root@pbu-OptiPlex-740-Enhanced:/home/pbu/NVIDIA_CUDA-6.5_Samples/0_Simple/matrixMul# ./matrixMul
[Matrix Multiply Using CUDA] - Starting...
GPU Device 0: "GeForce GTX 750 Ti" with compute capability 5.0
MatrixA(320,320), MatrixB(640,320)
Computing result using CUDA Kernel...
done
Performance= 157.82 GFlop/s, Time= 0.831 msec, Size= 131072000 Ops, WorkgroupSize= 1024 threads/block
Checking computed result for correctness: Result = PASS
Note: For peak performance, please refer to the matrixMulCUBLAS example.
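Before digging into the compiler errors, it helps to confirm which cuDNN release is actually installed. A quick check, assuming the header was copied to the path described in the question (recent cuDNN headers define a CUDNN_MAJOR macro; very old v1-era headers may not define it at all):

```shell
# Look for the version macro in the installed cuDNN header.
# -s suppresses grep's error message if the file does not exist.
grep -s -A 2 "define CUDNN_MAJOR" /usr/local/cuda/include/cudnn.h \
  || echo "no CUDNN_MAJOR macro found -- header is missing or too old to define it"
```

If the macro is missing or reports a different major version than the Caffe tree expects, that mismatch is the first thing to fix.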
These errors point to a cuDNN version mismatch: the Caffe release you are building was written against the cuDNN v1 API, and cuDNN v2 renamed cudnnTensor4dDescriptor_t to cudnnTensorDescriptor_t (and changed several function signatures), which is exactly why the compiler reports that the type "was not declared in this scope".
A version of Caffe that works with cuDNN v2 is available from S. Layton's GitHub page; his Caffe master branch builds against cuDNN v2.
He also made a pull request to the official Caffe repository, and the full discussion there has the details if you want them.
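As a sketch of the overall flow (the repository URL below is a placeholder — use the link from the pull-request discussion; the rest is the standard Makefile-based Caffe build procedure):

```shell
# Placeholder URL: substitute the cuDNN v2 compatible Caffe fork/branch.
git clone https://github.com/<user>/caffe.git caffe-cudnn2
cd caffe-cudnn2
cp Makefile.config.example Makefile.config
# In Makefile.config, uncomment:  USE_CUDNN := 1
make clean            # discard any objects built without cuDNN
make all -j"$(nproc)"
make test && make runtest
```

The `make clean` step matters: objects left over from a build without cuDNN (or against a different cuDNN version) will otherwise be mixed into the new build.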