
How can I make TensorFlow run on a GPU with compute capability 2.x?

I've successfully installed TensorFlow (GPU version) on Ubuntu 16.04, after making some small changes to get it working with the new Ubuntu LTS release.

However, I had assumed (who knows why) that my GPU met the minimum requirement of compute capability 3.5 or higher. That was not the case, since my GeForce 820M has only 2.1. Is there a way to make the TensorFlow GPU version work with my GPU?

I am asking because it initially seemed there was no way to run the TensorFlow GPU version on Ubuntu 16.04 at all, but by searching the internet I found out that was not the case, and indeed I almost got it working, were it not for this unsatisfied requirement. Now I am wondering whether this compute capability issue can be worked around as well.
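For reference, a quick sanity check that shows which devices TensorFlow detects (a sketch only; device_lib is an internal module, but it is available in the 1.x-era builds). With an unsupported GPU, only the CPU shows up:

    from tensorflow.python.client import device_lib

    # Lists the CPU and GPU devices TensorFlow can see.
    for device in device_lib.list_local_devices():
        print(device.name, device.device_type)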

asked Jul 23 '16 by mickkk




2 Answers

Sep. 2017 update: There is no way to do this without problems and pain. I tried hard every way, and even applied the trick below to force it to run, but in the end I had to give up. If you are serious about TensorFlow, just go ahead and buy a GPU with compute capability 3.0 or higher.

This is a trick to force TensorFlow to run on a compute capability 2.0 GPU (not officially supported):

  1. Find the file Lib/site-packages/tensorflow/python/_pywrap_tensorflow_internal.pyd (or Lib/site-packages/tensorflow/python/_pywrap_tensorflow.pyd)
  2. Open it with Notepad++ or a similar editor
  3. Search for the first occurrence of 3\.5.*5\.2 using regex
  4. Just before that match you will see a 3.0; change it to 2.0 (a scripted version of these steps is sketched below)
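If you prefer to script the edit, the sketch below does the same thing. It is a convenience only: the path is the one from step 1 (adjust it to your own site-packages), the pattern is the one from step 3, and it assumes the 3.0 from step 4 sits just before that match. It backs up the file before touching it.

    import re
    import shutil

    # Path from step 1; adjust to your own site-packages location.
    path = "Lib/site-packages/tensorflow/python/_pywrap_tensorflow_internal.pyd"
    shutil.copy(path, path + ".bak")  # keep a backup before patching

    with open(path, "rb") as f:
        data = f.read()

    # Step 3: locate the first occurrence of the capability list "3.5 ... 5.2".
    match = re.search(rb"3\.5.*?5\.2", data, re.DOTALL)
    if match is None:
        raise SystemExit("capability string not found")

    # Step 4: the "3.0" just before the match is the minimum supported
    # capability; rewrite it to "2.0" (same length, so offsets are preserved).
    start = data.rfind(b"3.0", 0, match.start())
    if start == -1:
        raise SystemExit("no 3.0 found before the match")
    data = data[:start] + b"2.0" + data[start + 3:]

    with open(path, "wb") as f:
        f.write(data)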

I made the change as above and could do simple calculations on the GPU, but I got stuck with strange, unexplained issues when trying it on practical projects (the same projects run fine on a compute capability 3.0 GPU).
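For context, the "simple calculation" that worked was along these lines (a minimal TF 1.x-style sketch; log_device_placement makes TensorFlow print which device each op lands on):

    import tensorflow as tf

    # Pin a small matmul to the GPU explicitly.
    with tf.device("/gpu:0"):
        a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
        b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
        c = tf.matmul(a, b)

    # TF 1.x session API; the placement log shows whether ops hit the GPU.
    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        print(sess.run(c))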

answered Sep 22 '22 by Tin Luu


Recent GPU versions of TensorFlow require compute capability 3.5 or higher (and use cuDNN to access the GPU).

cuDNN also requires a GPU of cc3.0 or higher:

cuDNN is supported on Windows, Linux and MacOS systems with Pascal, Kepler, Maxwell, Tegra K1 or Tegra X1 GPUs.

  • Kepler = cc3.x
  • Maxwell = cc5.x
  • Pascal = cc6.x
  • TK1 = cc3.2
  • TX1 = cc5.3

Fermi GPUs (cc2.0, cc2.1) are not supported by cuDNN.

Older GPUs (e.g. compute capability 1.x) are also not supported by cuDNN.

Note that there has never been a version of cuDNN, nor any version of TF, that officially supported NVIDIA GPUs below cc3.0. The initial version of cuDNN required cc3.0 GPUs, and so did the initial version of TF.
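To see where a particular card falls in the table above, one option is to query its compute capability directly. Here is a minimal sketch using the third-party pycuda package (pip install pycuda), comparing against cuDNN's cc3.0 floor:

    import pycuda.driver as cuda

    cuda.init()
    for i in range(cuda.Device.count()):
        dev = cuda.Device(i)
        major, minor = dev.compute_capability()
        supported = (major, minor) >= (3, 0)  # cuDNN's floor, per the above
        print("%s: cc%d.%d -> %s" % (
            dev.name(), major, minor,
            "meets cuDNN's cc3.0 floor" if supported
            else "below cuDNN's cc3.0 floor"))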

answered Sep 26 '22 by Robert Crovella