Install CUDA without root


I know that I can install Cuda with the following:

wget http://developer.download.nvidia.com/compute/cuda/7_0/Prod/local_installers/cuda_7.0.28_linux.run
chmod +x cuda_7.0.28_linux.run
./cuda_7.0.28_linux.run -extract=`pwd`/nvidia_installers
cd nvidia_installers
sudo ./NVIDIA-Linux-x86_64-346.46.run
sudo modprobe nvidia
sudo ./cuda-linux64-rel-7.0.28-19326674.run

Just wondering if I can install CUDA without root?

Thanks,

asked Sep 07 '16 by user200340


People also ask

Can I PIP install CUDA?

The reason cudatoolkit is not available on PyPI is that it is not a Python package. It is a toolkit from NVIDIA that needs a C compiler to exist on your system. pip was never intended to handle such cases, whereas Anaconda was.

Can I install CUDA without GPU?

The answer to your question is YES. The nvcc compiler driver is not related to the physical presence of a device, so you can compile CUDA code even without a CUDA-capable GPU.
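For example, nvcc will compile a trivial kernel on a machine with no GPU at all; only running the resulting binary needs a device. A minimal sketch (the file name hello.cu is just an example):

cat > hello.cu <<'EOF'
#include <cstdio>
__global__ void hello() { printf("hello from the GPU\n"); }
int main() { hello<<<1, 1>>>(); cudaDeviceSynchronize(); return 0; }
EOF
nvcc hello.cu -o hello   # compiles without a GPU present
./hello                  # running it does require a CUDA-capable device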


2 Answers

Update: the installation UI for 10.1 changed. The following works:

  • Deselect driver installation (pressing ENTER on it)
  • Change options -> root install path to a non-sudo directory.
  • Press A on the line marked with a + to access advanced options. Deselect create symbolic link, and change the toolkit install path.
  • Now the installation should work without root permissions (a non-interactive alternative is sketched below).
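If you prefer a non-interactive install, the runfile also accepts flags that do the same thing. A sketch, assuming a CUDA 10.1 runfile (the file name is an example; run ./<INSTALLER> --help to confirm the exact flag names for your version):

sh cuda_10.1.243_418.87.00_linux.run --silent --toolkit --toolkitpath=$HOME/cuda-10.1
export PATH=$HOME/cuda-10.1/bin:$PATH
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/cuda-10.1/lib64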

Thank you very much for the hints in the question! I just want to complete it with an approach that worked for me, also inspired by this gist, which hopefully helps in situations where a valid driver is already installed and you still need to install a more recent CUDA on Linux without root permissions.

TL;DR: Here are the steps to install CUDA9+CUDNN7 on Debian, and to install a pre-compiled version of TensorFlow 1.4 on Python 2.7 to test that everything works. Everything is done without root privileges and via the terminal. It should also work for other CUDA, CUDNN, TensorFlow and Python versions and on other Linux systems.


INSTALLATION

  1. Go to NVIDIA's official release page for CUDA (as of Nov. 2017, CUDA9 is out): https://developer.nvidia.com/cuda-downloads.

  2. Under your Linux distro, select the runfile (local) option. Note that the sudo indication present in the installation instructions is misleading, since it is possible to run this installer without root permissions. On a server, one easy way is to copy the <LINK> of the Download button and, in any location of your home directory, run wget <LINK>. It will download the <INSTALLER> file.

  3. Run chmod +x <INSTALLER> to make it executable, and execute it with ./<INSTALLER>.

  4. Accept the EULA, say no to driver installation, and enter a <CUDA> location under your home directory for the toolkit and a <CUDASAMPLES> location for the samples.

  5. Not asked here but recommended: download a compatible CUDNN file from the official website (you need to sign in). In my case, I downloaded cudnn-9.0-linux-x64-v7.tgz, compatible with CUDA9, into the <CUDNN> folder. Uncompress it: tar -xzvf ....

  6. Optional: compile the samples with cd <CUDASAMPLES> && make. There are some very nice examples there, and they are a very good starting point for writing CUDA code of your own.

  7. (If you did 5.): Copy the required files from <CUDNN> into <CUDA>, and grant read permission to the user (not sure if needed):

cp -P <CUDNN>/cuda/include/cudnn.h <CUDA>/include/
cp -P <CUDNN>/cuda/lib64/libcudnn* <CUDA>/lib64
chmod a+r <CUDA>/include/cudnn.h <CUDA>/lib64/libcudnn*
  8. Add the library to your environment. This is typically done by adding the following two lines to your ~/.bashrc file (in this example, the <CUDA> directory was ~/cuda9/). A quick sanity check is sketched right after this list:
export PATH=<CUDA>/bin:$PATH
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<CUDA>/lib64/
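After reloading ~/.bashrc, a quick sanity check that the non-root toolkit is the one being picked up (a sketch; the paths are the placeholders used above):

source ~/.bashrc
nvcc --version                  # should report the release you just installed (here 9.0)
cd <CUDASAMPLES>/1_Utilities/deviceQuery && make
./deviceQuery                   # should list your GPU and end with Result = PASS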

FOR QUICK TESTING OR TENSORFLOW USERS

The quickest way to get a TensorFlow compatible with CUDA9 and CUDNN7 (and a very quick way to test this) is to download a precompiled wheel file and install it with pip install <WHEEL>. Most of the versions you need can be found in mind's repo (thanks a lot, guys). A minimal test that confirms that CUDNN is also working involves the use of tf.nn.conv2d:

import tensorflow as tf

x = tf.nn.conv2d(tf.ones([1, 1, 10, 1]), tf.ones([1, 5, 1, 1]),
                 strides=[1, 1, 1, 1], padding='SAME')
with tf.Session() as sess:
    sess.run(x)  # this should output a tensor of shape (1,1,10,1) with [3,4,5,5,5,5,5,5,4,3]

In my case, the wheel I installed required Intel's MKL library, as explained here. Again, from the terminal and without root permissions, these are the steps I followed to install the library and make TensorFlow find it (reference); a quick check is sketched right after the list:

  1. git clone https://github.com/01org/mkl-dnn.git
  2. cd mkl-dnn/scripts && ./prepare_mkl.sh && cd ..
  3. mkdir -p build && cd build
  4. cmake -D CMAKE_INSTALL_PREFIX:PATH=<TARGET_DIR_IN_HOME> ..
  5. make # this takes a while
    1. make doc # do this optionally if you have doxygen
  6. make test # also takes a while
  7. make install # installs into <TARGET_DIR_IN_HOME>
  8. add the following to your ~/.bashrc: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<TARGET_DIR_IN_HOME>/lib
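Once those steps finish, a quick check that the library landed where TensorFlow can pick it up (a sketch; <TARGET_DIR_IN_HOME> is the install prefix chosen in step 4):

ls <TARGET_DIR_IN_HOME>/lib       # expect libmkldnn.so* (plus the bundled mklml library)
source ~/.bashrc
python -c "import tensorflow as tf; print(tf.__version__)"   # should import without missing-library errors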

Hope this helps!
Andres

answered Sep 21 '22 by fr_andres


You can install it using conda with the following command:

conda install -c anaconda cudatoolkit 

But you need to have prior access to the device (GPU).

EDIT: If you run into errors with the anaconda channel, switch to the conda-forge channel, which is updated more frequently.

conda install -c conda-forge cudatoolkit 
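To check what was installed (a sketch; versions will vary):

conda list cudatoolkit            # shows the cudatoolkit version conda resolved
nvidia-smi                        # the NVIDIA driver itself must already be present on the machine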
answered Sep 20 '22 by Trect