According to https://www.tensorflow.org/install/install_mac:
Note: As of version 1.2, TensorFlow no longer provides GPU support on Mac OS X.
However, I want to run an eGPU setup, such as an Akitio Node with a GTX 1080 Ti, via Thunderbolt 3.
What steps are required to get this setup to work, and what else is needed to get CUDA / TensorFlow working?
Apple has since announced that Mac users are able to accelerate training on the GPU. See Apple's announcement for details.
Conclusion: you can now install TensorFlow with GPU support on an M1 Pro MacBook, and the same step-by-step guide should work on any Apple Silicon device, from the Mac Mini to the M1 Max.
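For Apple Silicon machines, a quick way to confirm that the GPU is actually visible to TensorFlow is sketched below. Treat it as a sketch under assumptions: it presumes the tensorflow-macos and tensorflow-metal packages (Apple's Metal plugin) are the route you took, which the excerpt above does not spell out.

# Assumes Apple's Metal plugin packages; these package names are not from the original answer.
pip install tensorflow-macos tensorflow-metal
# A GPU device should appear in this list if the Metal plugin is active.
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"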
Note: TensorFlow binaries use AVX instructions which may not run on older CPUs. The following GPU-enabled devices are supported: NVIDIA® GPU card with CUDA® architectures 3.5, 5.0, 6.0, 7.0, 7.5, 8.0 and higher. See the list of CUDA®-enabled GPU cards.
CUDA in particular is unlikely to ever be supported on ARM-based Apple systems: NVIDIA has exited the Apple market after years of disagreements with Apple. Sorry, but you simply will not be able to use your M1 MacBook Pro for this purpose.
I wrote a little tutorial on compiling TensorFlow 1.2 with GPU support on macOS. I think it's customary to copy relevant parts to SO, so here it goes:
If you haven't done a TensorFlow GPU set-up before, I suggest first setting everything up with pip install tensorflow-gpu. Once you get that working, the CUDA set-up would also work if you're compiling TensorFlow. If you have an external GPU, YellowPillow's answer (or mine) might help you get things set up.
Follow the official instructions for building TensorFlow from sources, but substitute git checkout r1.0 with git checkout r1.2.
When doing ./configure, pay attention to the Python library path: it sometimes suggests an incorrect one. I chose the default options in most cases, except for the Python library path, CUDA support and compute capability. Don't use Clang as the CUDA compiler: this will lead you to the error "Inconsistent crosstool configuration; no toolchain corresponding to 'local_darwin' found for cpu 'darwin'.". Using /usr/bin/gcc as your compiler will actually use the Clang that comes with macOS / Xcode.
I also had to comment out the line in tensorflow/third_party/gpus/cuda/BUILD.tpl which contained linkopts = ["-lgomp"] (but the location of the line might obviously change). Some people had issues with zmuldefs, but I assume that was with earlier versions; thanks to udnaan for pointing out that it's OK to comment out these lines.
Below is my full configuration; the build step that comes after it is sketched after the transcript.
Using python library path: /Users/m/code/3rd/conda/envs/p3gpu/lib/python3.6/site-packages
Do you wish to build TensorFlow with MKL support? [y/N] N
No MKL support will be enabled for TensorFlow
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N]
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N]
No Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N]
No XLA support will be enabled for TensorFlow
Do you wish to build TensorFlow with VERBS support? [y/N]
No VERBS support will be enabled for TensorFlow
Do you wish to build TensorFlow with OpenCL support? [y/N]
No OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] y
CUDA support will be enabled for TensorFlow
Do you want to use clang as CUDA compiler? [y/N]
nvcc will be used as CUDA compiler
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to use system default]:
Please specify the location where CUDA toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Please specify the cuDNN version you want to use. [Leave empty to use system default]:
Please specify the location where cuDNN library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size.
[Default is: "3.5,5.2"]: 6.1
INFO: Starting clean (this may take a while). Consider using --async if the clean takes more than several minutes.
Configuration finished
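The transcript stops at configuration; the build-and-install step that follows is not shown. As a rough sketch based on the official source-build instructions of that era (the exact wheel filename will differ on your machine):

# Build the GPU-enabled pip package (this takes a while).
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
# Create the wheel and install it; the wheel path below is a placeholder glob.
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl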
Assuming that you have already set up your eGPU box and attached the TB3 cable from the eGPU to your TB3 port:
1. Download the automate-eGPU script and run it
curl -o ~/Desktop/automate-eGPU.sh https://raw.githubusercontent.com/goalque/automate-eGPU/master/automate-eGPU.sh && chmod +x ~/Desktop/automate-eGPU.sh && cd ~/Desktop && sudo ./automate-eGPU.sh
You might get an error saying:
"Boot into recovery partition and type: csrutil disable"
All you need to do now is restart your computer, and while it is restarting hold down cmd + R
to enter recovery mode. Then open the Terminal while in recovery mode and type:
csrutil disable
Then restart your computer and re-run the automate-eGPU.sh
script
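Before moving on, it is worth confirming that SIP really is disabled and that macOS sees the card. Two standard checks (an optional addition, not part of the original steps):

# Should report: System Integrity Protection status: disabled.
csrutil status
# The eGPU (e.g. the GTX 1080 Ti) should appear in the displays/graphics list.
system_profiler SPDisplaysDataType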
2. Downloading and installing CUDA
Run the cuda_8.0.61_mac.dmg
file and follow the installer through. Afterwards you will need to set the paths.
Go to your Terminal and type:
vim ~/.bash_profile
(or wherever you store your environment variables) and add these three lines:
export CUDA_HOME=/usr/local/cuda
export DYLD_LIBRARY_PATH="$CUDA_HOME/lib:$CUDA_HOME:$CUDA_HOME/extras/CUPTI/lib"
export LD_LIBRARY_PATH=$DYLD_LIBRARY_PATH
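To pick up these variables in the current shell and sanity-check the toolkit, something like the following should work. The export PATH line is my addition, not part of the original steps; nvcc lives in /usr/local/cuda/bin, so it needs to be on the PATH to be found.

source ~/.bash_profile              # reload the variables added above
export PATH="$CUDA_HOME/bin:$PATH"  # addition: make nvcc reachable from the shell
nvcc --version                      # should report CUDA release 8.0 if the install worked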
3. Downloading and installing cuDNN
Downloading cuDNN is a bit more troublesome: you have to sign up as an Nvidia developer before you can download it. Make sure to download the cuDNN v5.1 Library for OSX,
as it's the one that Tensorflow v1.1 expects. (Note that we can't use Tensorflow v1.2, as there is no GPU support for Macs :( )
This gives you an archive called cudnn-8.0-osx-x64-v5.1.tgz.
Unpack it, which will create a folder called cuda,
and cd into it using Terminal. Assuming that the folder is in Downloads,
open Terminal and type:
cd ~/Downloads/cuda
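If the archive hasn't been unpacked yet, the extraction can also be done from Terminal; a minimal sketch, assuming the .tgz was saved to ~/Downloads:

cd ~/Downloads
tar -xzvf cudnn-8.0-osx-x64-v5.1.tgz   # creates the cuda/ folder
cd cuda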
Now we need to copy the cuDNN files to where CUDA is stored:
sudo cp include/* /usr/local/cuda/include/
sudo cp lib/* /usr/local/cuda/lib/
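To confirm the copy worked (an optional sanity check), list the cuDNN header and libraries in the CUDA tree:

ls /usr/local/cuda/include/cudnn.h   # the cuDNN header should be present
ls /usr/local/cuda/lib/libcudnn*     # the cuDNN libraries should be listed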
4. Now install Tensorflow-GPU v1.1 in your conda/virtualenv
Since I use conda, I created a new environment using Terminal:
conda create -n egpu python=3
source activate egpu
pip install tensorflow-gpu  # should install version 1.1
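If pip pulls a different release than expected, you can pin the version explicitly and check what ended up in the environment; a small sketch:

pip install tensorflow-gpu==1.1.0   # pin explicitly if a newer version was installed
python -c "import tensorflow as tf; print(tf.__version__)"   # should print 1.1.0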
5. Verify that it works
First you have to restart your computer. Then open Terminal, type python,
and enter:
import tensorflow as tf
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)
with tf.Session() as sess:
    print(sess.run(c))
If your GPU is set up correctly, this should run with no problem. If it isn't, you will get a stack trace (just a bunch of error messages), and it should include
Cannot assign a device to node 'MatMul': Could not satisfy explicit device specification '/device:GPU:0' because no devices matching that specification are registered in this process
If you don't get that error, then you're done. Congrats! I just got mine set up today and it's working perfectly :)
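Another quick check that doesn't rely on hitting the error path is to list the devices TensorFlow can see; this uses device_lib from TF 1.x, and a GPU entry (e.g. /gpu:0) should appear alongside the CPU:

python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"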