 

How to use Keras with GPU?

I've successfully installed TensorFlow with GPU support. When I run the following script, I get this output:

# List every device TensorFlow can see (the CPU and any GPUs)
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-03-26 Found device 0 with properties: name: GeForce GTX 970 major: 5 minor: 2 memoryClockRate(GHz): 1.253 pciBusID: 0000:01:00.0 totalMemory: 4.00GiB freeMemory: 3.31GiB
2018-03-26 11:47:03.186046: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1312] Adding visible gpu devices: 0
2018-03-26 11:47:04.062049: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:993] Creating TensorFlow device (/device:GPU:0 with 3043 MB memory) -> physical GPU (device: 0, name: GeForce GTX 970, pci bus id: 0000:01:00.0, compute capability: 5.2)

[name: "/device:CPU:0"
 device_type: "CPU"
 memory_limit: 268435456
 locality { }
 incarnation: 8082333747214375667
, name: "/device:GPU:0"
 device_type: "GPU"
 memory_limit: 3190865920
 locality { bus_id: 1 }
 incarnation: 1190887510488091263
 physical_device_desc: "device: 0, name: GeForce GTX 970, pci bus id: 0000:01:00.0, compute capability: 5.2"]

If I run a CNN in Keras, for example, will it automatically use the GPU? Or do I have to write some code to force Keras into using the GPU?

For example, with the MNIST dataset, how would I use the GPU?

import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

# input_shape, num_classes, batch_size, epochs and the MNIST arrays
# (x_train, y_train, x_test, y_test) are defined as in the standard Keras MNIST example
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])

model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))
asked Mar 26 '18 by user2505650

1 Answer

You don't have to explicitly tell Keras to use the GPU. If a GPU is available (and from your output I can see that's the case), it will be used automatically.
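If you ever do want to control placement yourself, here is a minimal sketch (assuming the TensorFlow backend, TF 1.x as in your logs; the layer sizes are arbitrary placeholders): device placement is fixed when the graph is built, so you build and compile the model inside a tf.device() scope.

import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense

# With a visible GPU, Keras places ops on '/device:GPU:0' by default,
# so the model in your question needs no changes at all.

# Only to force a specific device (e.g. CPU for a timing comparison),
# construct and compile the model inside a tf.device() scope:
with tf.device('/device:CPU:0'):          # or '/device:GPU:0'
    cpu_model = Sequential()
    cpu_model.add(Dense(10, activation='softmax', input_shape=(784,)))
    cpu_model.compile(loss='categorical_crossentropy', optimizer='adam')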

You can also check this empirically by watching GPU usage during training: on Windows 10, just open the Task Manager and look under the 'Performance' tab (see here).
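Another way to confirm it from code (a sketch assuming the TensorFlow backend and TF 1.x, matching the logs in your question) is to enable device-placement logging before building the model, so TensorFlow prints the device each op is assigned to:

import tensorflow as tf
from keras import backend as K

# Log the device every op is placed on; with a GPU present you should
# see '/device:GPU:0' next to the convolution and matmul ops.
config = tf.ConfigProto(log_device_placement=True)
K.set_session(tf.Session(config=config))

# ... build, compile and fit the model as usual after this point.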

answered Oct 13 '22 by Alberto Re