
TensorFlow is not using my M1 MacBook GPU during training

I have installed tensorflow-macos, and while training, my CPU usage is high but my GPU is not being used (screenshots of CPU and GPU usage omitted).

Can I make TensorFlow run on the GPU anyway?

asked May 02 '21 by hat


People also ask

Does TensorFlow use GPU on M1 Mac?

Finally, to sum up, all you need to get TensorFlow running with GPU support on your M1 or M2 Mac is to install hdf5 through Homebrew and then install both tensorflow-macos and tensorflow-metal through pip.
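Assuming tensorflow-macos and tensorflow-metal are installed, a quick sanity check (a minimal sketch, not part of the original answer) is to ask TensorFlow which GPUs it can see:

import tensorflow as tf

# On an M1/M2 Mac with tensorflow-metal installed, this should list one
# device of type 'GPU' (the Metal PluggableDevice).
print(tf.config.list_physical_devices('GPU'))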

Can TensorFlow use GPU on Mac?

Apple has announced that Mac users are able to accelerate training on the GPU; see Apple's announcement.


3 Answers

I've been setting up my new M1 machine today and was looking for a test such as the one Aman Anand already provided here. It runs successfully on the GPU after following the standard instructions provided in #153, using miniforge (installed through Homebrew) as the package manager and an environment cloned from the YAML file in the #153 guide.

[Screenshot: process running on the GPU]

I also ran the smaller, simpler snippet below, which runs only on the CPU ('% GPU' == 0%):

import numpy as np
import tensorflow as tf

### Aman's code to enable the GPU
#from tensorflow.python.compiler.mlcompute import mlcompute
#tf.compat.v1.disable_eager_execution()
#mlcompute.set_mlc_device(device_name='gpu')
#print("is_apple_mlc_enabled %s" % mlcompute.is_apple_mlc_enabled())
#print("is_tf_compiled_with_apple_mlc %s" % #mlcompute.is_tf_compiled_with_apple_mlc())
#print(f"eagerly? {tf.executing_eagerly()}")
#print(tf.config.list_logical_devices())

x = np.random.random((10000, 5))
y = np.random.random((10000, 2))

x2 = np.random.random((2000, 5))
y2 = np.random.random((2000, 2))

inp = tf.keras.layers.Input(shape = (5,))
l1 = tf.keras.layers.Dense(256, activation = 'sigmoid')(inp)
l1 = tf.keras.layers.Dense(256, activation = 'sigmoid')(l1)
l1 = tf.keras.layers.Dense(256, activation = 'sigmoid')(l1)
l1 = tf.keras.layers.Dense(256, activation = 'sigmoid')(l1)
l1 = tf.keras.layers.Dense(256, activation = 'sigmoid')(l1)
o = tf.keras.layers.Dense(2, activation = 'sigmoid')(l1)

model = tf.keras.models.Model(inputs = [inp], outputs = [o])
model.compile(optimizer = "Adam", loss = "mse")

model.fit(x, y, validation_data = (x2, y2), batch_size = 500, epochs = 500)

[Screenshot: training not using the GPU]

Uncommenting the lines added from Aman's code and rerunning makes the GPU work again:

[Screenshot: training using the GPU again]

If these scripts still don't use the GPU according to Activity Monitor (set the update rate to 1 s in View > Update Frequency), go back to the #153 page, start again from a clean slate, follow the instructions carefully, and be sure to ignore the instructions meant for Intel/x86.
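If you prefer a programmatic check over Activity Monitor, a minimal sketch like the one below (assuming a working tensorflow-metal install) prints which devices TensorFlow sees and logs where each op is placed:

import tensorflow as tf

# Log the device every op is placed on, instead of watching Activity Monitor.
tf.debugging.set_log_device_placement(True)

print(tf.config.list_physical_devices())  # should include a GPU entry

a = tf.random.uniform((1000, 1000))
b = tf.matmul(a, a)  # the placement log for MatMul should mention device GPU:0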

My steps:

  1. Install Xcode (from the App Store).
  2. Install Homebrew (don't forget to set the PATH as recommended right after installation finishes; the terminal then needs restarting, or your shell profile reloading).
  3. Install miniforge (brew install miniforge).
  4. Copy the environment.yaml file and clone it as a new conda environment with the command given in #153.
  5. profit.

UPDATE 2022-01-26:

The workflow for installing TensorFlow on Apple silicon has become much easier over the last six months. It still relies on miniforge, but the packages are now distributed through conda and pip into a standard conda environment rather than having to create one from the YAML file. These instructions are very easy to follow and should have you going in under two minutes. The only exception is that I had to run one additional command afterwards to install openblas through conda to make it work.

My test above breaks on TensorFlow 2.7 because they changed something to do with the mlcompute location for M1. However, mlcompute is no longer needed to direct work to the GPU with the Metal plugin: the test works again by simply removing the references to mlcompute in lines 5-10, and it runs on the GPU, as can be seen in Activity Monitor.
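For TensorFlow 2.7+ with the Metal plugin, a minimal sketch of explicit placement (assuming the Metal device registered as /GPU:0) looks like this; no mlcompute calls are needed:

import tensorflow as tf

# With tensorflow-macos >= 2.7 and tensorflow-metal, ops are placed on the
# Metal GPU automatically; pinning with tf.device is only for peace of mind.
with tf.device('/GPU:0'):
    a = tf.random.uniform((4000, 4000))
    b = tf.random.uniform((4000, 4000))
    c = tf.matmul(a, b)

print(c.device)  # should end in .../device:GPU:0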

answered Oct 07 '22 by G.S


This issue has already been fixed with the release of TensorFlow-macos 2.5. The easiest way to use the GPU for TensorFlow on an M1 Mac is to create a new conda miniforge3 ARM64 environment and run the following three commands to install TensorFlow and its dependencies:

conda install -c apple tensorflow-deps
python -m pip install tensorflow-macos
python -m pip install tensorflow-metal

Further instructions are on this page: https://developer.apple.com/metal/tensorflow-plugin/

"Accelerate training of machine learning models with TensorFlow right on your Mac. Install TensorFlow v2.5 and the tensorflow-metal PluggableDevice to accelerate training with Metal on Mac GPUs."

answered Oct 07 '22 by Long Le


You can, but it's a bit of a pain as of now, it appears. One solution is to use Miniforge. If you already use conda, you need to uninstall that first.

  1. Install Xcode and the Command Line Tools package.
  2. Install Miniforge to get conda.
  3. Install Apple's fork of TensorFlow from conda-forge into a conda environment, along with the other required packages.

My answer is based on this helpful guide: https://medium.com/gft-engineering/macbook-m1-tensorflow-on-jupyter-notebooks-6171e1f48060

This issue on Apple's GitHub has more discussion: https://github.com/apple/tensorflow_macos/issues/153

answered Oct 07 '22 by jeffhale