To use your GPU with Docker, begin by adding the NVIDIA Container Toolkit to your host. It integrates with Docker Engine so that your containers are automatically configured for GPU support. Once the toolkit is installed and the Docker daemon has been restarted, you're ready to start a test container.
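For example, a quick smoke test might look like this (the image tag below is only an illustration; use whichever nvidia/cuda tag matches your driver):
docker run --rm --gpus all nvidia/cuda:12.3.1-base-ubuntu22.04 nvidia-smi
If the toolkit is working, nvidia-smi prints the GPUs visible from inside the container.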
At the time of writing, Windows does not support GPU access from within a Docker container; GPU-accelerated containers are supported only on Linux hosts, not on macOS or Windows.
Using the NVIDIA Container Runtime for Docker
There are three equivalent ways to launch a GPU-enabled container: use docker run and specify --runtime=nvidia; use nvidia-docker run (the new package provides backward compatibility, so you can still run GPU-accelerated containers with this command, and the new runtime will be used); or configure nvidia as the default runtime and use plain docker run.
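To set nvidia as the default runtime, the usual approach is an entry in /etc/docker/daemon.json along these lines (a sketch; the runtime path can differ depending on how the toolkit was installed):
$ sudo tee /etc/docker/daemon.json <<'EOF'
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
$ sudo systemctl restart docker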
If you want to maximize your GPU utilization, you can configure time-sharing for each multi-instance GPU partition. You can then run multiple containers on each partition, with those containers sharing access to the resources on that partition.
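As a rough sketch, with a recent NVIDIA Container Toolkit you can point several containers at the same MIG partition; the device=GPU:MIG-instance index syntax below reflects the toolkit's MIG support, but treat it as an assumption and check the documentation for your version:
# Two containers sharing the first MIG partition of GPU 0 (illustrative names and image)
$ docker run -d --name worker1 --gpus '"device=0:0"' nvidia/cuda sleep infinity
$ docker run -d --name worker2 --gpus '"device=0:0"' nvidia/cuda sleep infinity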
Regan's answer is great, but it's a bit out of date, since the correct way to do this is to avoid the LXC execution context; Docker dropped LXC as the default execution context as of Docker 0.9.
Instead it's better to tell Docker about the NVIDIA devices via the --device flag, and just use the native execution context rather than LXC.
These instructions were tested on an AWS GPU instance running Ubuntu 14.04 with CUDA 6.5.
See CUDA 6.5 on AWS GPU Instance Running Ubuntu 14.04 to get your host machine set up.
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
$ sudo sh -c "echo deb https://get.docker.com/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
$ sudo apt-get update && sudo apt-get install lxc-docker
ls -la /dev | grep nvidia
crw-rw-rw- 1 root root 195, 0 Oct 25 19:37 nvidia0
crw-rw-rw- 1 root root 195, 255 Oct 25 19:37 nvidiactl
crw-rw-rw- 1 root root 251, 0 Oct 25 19:37 nvidia-uvm
I've created a Docker image that has the CUDA drivers pre-installed. The Dockerfile is available on Docker Hub if you want to know how this image was built.
You'll want to customize this command to match your NVIDIA devices. Here's what worked for me:
$ sudo docker run -ti --device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm tleyden5iwx/ubuntu-cuda /bin/bash
The following steps should be run from inside the Docker container you just launched.
Install CUDA samples:
$ cd /opt/nvidia_installers
$ ./cuda-samples-linux-6.5.14-18745345.run -noprompt -cudaprefix=/usr/local/cuda-6.5/
Build deviceQuery sample:
$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery
$ make
$ ./deviceQuery
If everything worked, you should see the following output:
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs = 1, Device0 = GRID K520
Result = PASS
Writing an updated answer, since most of the answers already present are obsolete by now.
Versions earlier than Docker 19.03 used to require nvidia-docker2 and the --runtime=nvidia flag.
Since Docker 19.03, you need to install the nvidia-container-toolkit package and then use the --gpus all flag.
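Side by side, the two styles look like this (nvidia-smi is just a convenient test command):
# Docker earlier than 19.03, with nvidia-docker2 installed
$ docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
# Docker 19.03 and later, with nvidia-container-toolkit installed
$ docker run --gpus all --rm nvidia/cuda nvidia-smi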
So, here are the basics,
Package Installation
Install the nvidia-container-toolkit package as per the official documentation on GitHub.
For Red Hat-based OSes, execute the following set of commands:
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | sudo tee /etc/yum.repos.d/nvidia-docker.repo
$ sudo yum install -y nvidia-container-toolkit
$ sudo systemctl restart docker
For Debian-based OSes, execute the following set of commands:
# Add the package repositories
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
$ sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
$ sudo systemctl restart docker
Running Docker with GPU support
docker run --name my_all_gpu_container --gpus all -t nvidia/cuda
Please note, the --gpus all flag is used to assign all available GPUs to the Docker container.
To assign a specific GPU to the Docker container (in case multiple GPUs are available in your machine):
docker run --name my_first_gpu_container --gpus device=0 nvidia/cuda
Or
docker run --name my_first_gpu_container --gpus '"device=0"' nvidia/cuda
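In the same spirit, you can hand a container several specific GPUs, or just a count of GPUs, per the syntax of Docker's --gpus flag (the container names here are arbitrary):
docker run --name my_two_gpu_container --gpus '"device=0,1"' nvidia/cuda
docker run --name my_any_two_gpus_container --gpus 2 nvidia/cuda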
OK, I finally managed to do it without using --privileged mode.
I'm running Ubuntu Server 14.04 and I'm using the latest CUDA (6.0.37 for Linux 13.04, 64-bit).
Install the NVIDIA driver and CUDA on your host (it can be a little tricky, so I suggest you follow this guide: https://askubuntu.com/questions/451672/installing-and-testing-cuda-in-ubuntu-14-04).
ATTENTION: it's really important that you keep the files you used for the host CUDA installation; you will need them again when building the image below.
We need to run the Docker daemon using the LXC driver in order to modify the configuration and give the container access to the devices.
One-time use:
sudo service docker stop
sudo docker -d -e lxc
Permanent configuration: modify your Docker configuration file located at /etc/default/docker and change the DOCKER_OPTS line by adding '-e lxc'. Here is my line after modification:
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 -e lxc"
Then restart the daemon using
sudo service docker restart
How do you check whether the daemon is actually using the LXC driver?
docker info
The Execution Driver line should look like this:
Execution Driver: lxc-1.0.5
Here is a basic Dockerfile to build a CUDA-compatible image.
FROM ubuntu:14.04
MAINTAINER Regan <http://stackoverflow.com/questions/25185405/using-gpu-from-a-docker-container>
RUN apt-get update && apt-get install -y build-essential
RUN apt-get --purge remove -y nvidia*
# Get the install files you used to install CUDA and the NVIDIA drivers on your host
ADD ./Downloads/nvidia_installers /tmp/nvidia
# Install the driver (without the kernel module, which stays on the host)
RUN /tmp/nvidia/NVIDIA-Linux-x86_64-331.62.run -s -N --no-kernel-module
# For some reason the driver installer leaves temp files behind during a docker build (I don't know why), and the CUDA installer fails if they are still there, so delete them
RUN rm -rf /tmp/selfgz7
# CUDA toolkit installer
RUN /tmp/nvidia/cuda-linux64-rel-6.0.37-18176142.run -noprompt
# CUDA samples; comment this out if you don't want them
RUN /tmp/nvidia/cuda-samples-linux-6.0.37-18176142.run -noprompt -cudaprefix=/usr/local/cuda-6.0
# Add the CUDA libraries to the library path (ENV persists in the image, unlike RUN export)
ENV LD_LIBRARY_PATH /usr/local/cuda/lib64
# Update the ld.so.conf.d directory
RUN touch /etc/ld.so.conf.d/cuda.conf
# Delete the installer files
RUN rm -rf /tmp/*
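With the installer files placed in ./Downloads/nvidia_installers relative to the build context, build the image as usual; the tag cuda matches the run command used below:
sudo docker build -t cuda .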
First you need to identify the major number associated with your device. The easiest way is to run the following command:
ls -la /dev | grep nvidia
If the result is blank, launching one of the CUDA samples on the host should do the trick. The result should look like the device listing shown earlier on this page: as you can see, there is a set of two numbers between the group and the date. These two numbers are the major and minor numbers (written in that order) and designate a device. We will just use the major numbers for convenience.
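If you prefer, you can also print the major and minor numbers directly; this relies on GNU stat's %t/%T format specifiers, which output them in hexadecimal:
stat -c '%n major:%t minor:%T' /dev/nvidia*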
Why did we activate the LXC driver? To use the lxc conf option, which allows us to permit our container to access those devices. The option is as follows (I recommend using * for the minor number since it reduces the length of the run command):
--lxc-conf='lxc.cgroup.devices.allow = c [major number]:[minor number or *] rwm'
So if I want to launch a container (supposing your image name is cuda):
docker run -ti --lxc-conf='lxc.cgroup.devices.allow = c 195:* rwm' --lxc-conf='lxc.cgroup.devices.allow = c 243:* rwm' cuda
We just released an experimental GitHub repository which should ease the process of using NVIDIA GPUs inside Docker containers.
Recent enhancements by NVIDIA have produced a much more robust way to do this.
Essentially they have found a way to avoid the need to install the CUDA/GPU driver inside the containers and have it match the host kernel module.
Instead, drivers are on the host and the containers don't need them. It requires a modified docker-cli right now.
This is great, because now containers are much more portable.
A quick test on Ubuntu:
# Install nvidia-docker and nvidia-docker-plugin
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb
# Test nvidia-smi
nvidia-docker run --rm nvidia/cuda nvidia-smi
For more details, see GPU-Enabled Docker Container and https://github.com/NVIDIA/nvidia-docker.