
Tensorflow will not run on GPU

Tags:

tensorflow

gpu

I'm a newbie when it comes to AWS and TensorFlow, and I've been learning about CNNs over the last week via Udacity's Machine Learning course. Now I need to use an AWS GPU instance, so I launched a p2.xlarge instance of the Deep Learning AMI with Source Code (CUDA 8, Ubuntu), which is what they recommended.

But now it seems that TensorFlow is not using the GPU at all; it is still training on the CPU. I did some searching and found some answers to this problem, but none of them worked.

<code>nvidia-smi</code> shows the GPU (its output was attached as a screenshot that is not reproduced here).

When I run the Jupyter notebook, it still uses the CPU.

What do I do to get it to run on the GPU and not the CPU?

asked Dec 24 '18 by Pawan Bhandarkar

People also ask

How do I enable GPU usage in TensorFlow?

In Ubuntu 18.04 LTS, the latest conda works well in resolving dependency issues of packages for the newest version of python. Thus, all you have to do is run conda create --name tf_gpu and then conda activate tf_gpu to activate it. Then conda install tensorflow-gpu , which should do it.

Why can't TensorFlow find my GPU?

If TensorFlow doesn't detect your GPU, it will default to the CPU, which means heavy training jobs will take a very long time to complete. This is most likely because the CUDA and cuDNN libraries are not being correctly detected on your system.

Can I run TensorFlow with GPU?

TensorFlow supports running computations on a variety of types of devices, including CPU and GPU.


1 Answer

TensorFlow failing to detect the GPU is usually due to one of the following reasons:

  1. Only the TensorFlow CPU version is installed on the system.
  2. Both the CPU and GPU versions are installed, but the Python environment prefers the CPU version over the GPU version.

Before proceeding, we assume that the environment is an AWS Deep Learning AMI with CUDA 8.0 and TensorFlow 1.4.1 installed, as established in the comments on the question.

To solve the problem, we proceed as follows:

  1. Check the installed version of TensorFlow by executing the following command from the OS terminal.

pip freeze | grep tensorflow

  2. If only the CPU version is installed, remove it and install the GPU version by executing the following commands.

pip uninstall tensorflow

pip install tensorflow-gpu==1.4.1

  3. If both the CPU and GPU versions are installed, remove both of them, then install the GPU version only.

pip uninstall tensorflow

pip uninstall tensorflow-gpu

pip install tensorflow-gpu==1.4.1
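The decision between steps 2 and 3 can be sketched as a small helper that classifies the output of <code>pip freeze</code> (the <code>diagnose</code> function below is hypothetical, written only for illustration):

```python
def diagnose(freeze_output):
    """Classify a `pip freeze` listing into the cases described above.

    Returns one of: "cpu-only", "gpu-only", "both", "none".
    """
    # Package name is everything before the "==" version pin.
    pkgs = {line.split("==")[0].strip().lower()
            for line in freeze_output.splitlines() if line.strip()}
    has_cpu = "tensorflow" in pkgs
    has_gpu = "tensorflow-gpu" in pkgs
    if has_cpu and has_gpu:
        return "both"      # step 3: uninstall both, reinstall tensorflow-gpu
    if has_cpu:
        return "cpu-only"  # step 2: uninstall tensorflow, install tensorflow-gpu
    if has_gpu:
        return "gpu-only"  # nothing to fix on the pip side
    return "none"

print(diagnose("tensorflow==1.4.1\ntensorflow-gpu==1.4.1"))  # both
```
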

At this point, if all of TensorFlow's dependencies are installed correctly, the GPU version should work fine. A common error at this stage (encountered by the OP) is a missing cuDNN library, which produces the following error when importing tensorflow into a Python module:

ImportError: libcudnn.so.6: cannot open shared object file: No such file or directory
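When scripting environment checks, the name of the missing library can be pulled out of such an error message programmatically. The helper below is hypothetical and assumes the standard dynamic-linker message format:

```python
import re

def missing_shared_object(err_msg):
    """Extract the library name from a 'cannot open shared object file' message."""
    m = re.match(r"(\S+): cannot open shared object file", err_msg)
    return m.group(1) if m else None

msg = "libcudnn.so.6: cannot open shared object file: No such file or directory"
print(missing_shared_object(msg))  # libcudnn.so.6
```
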

It can be fixed by installing the correct version of NVIDIA's cuDNN library. TensorFlow 1.4.1 depends on cuDNN 6.0 and CUDA 8, so we download the corresponding version from the cuDNN archive page (Download Link). We have to log in to an NVIDIA developer account to download the file, so it cannot be fetched with command-line tools such as wget or curl. A possible solution is to download the file on the host system and use scp to copy it onto the AWS instance.

Once copied to AWS, extract the file using the following command:

tar -xzvf cudnn-8.0-linux-x64-v6.0.tgz

The extracted directory has a structure similar to that of the CUDA toolkit installation directory. Assuming the CUDA toolkit is installed in /usr/local/cuda, we can install cuDNN by copying the files from the downloaded archive into the corresponding folders of the CUDA toolkit installation directory, then updating the linker cache with ldconfig:

cp cuda/include/* /usr/local/cuda/include

cp cuda/lib64/* /usr/local/cuda/lib64

ldconfig

After this, we should be able to import tensorflow GPU version into our python modules.
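One quick way to confirm that the dynamic linker can now resolve a shared library, without importing TensorFlow at all, is to probe it via <code>ctypes</code>. This is a sketch: on the AWS instance you would check <code>"cudnn"</code>, while the runnable example below uses a deliberately nonexistent name so it works anywhere:

```python
import ctypes
import ctypes.util

def can_load(libname):
    """Return True if the dynamic linker can locate and load `libname`."""
    path = ctypes.util.find_library(libname)
    if path is None:
        return False
    try:
        ctypes.CDLL(path)
        return True
    except OSError:
        return False

# On the configured AWS instance, can_load("cudnn") should become True
# once libcudnn.so.6 has been copied in and ldconfig has been run.
print(can_load("no_such_library_xyz"))  # False
```
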

A few considerations:

  • If we are using Python 3, pip should be replaced with pip3.
  • Depending on user privileges, the commands pip, cp and ldconfig may need to be run with sudo.
answered Jan 03 '23 by T.Z