I have a machine with several GPUs. My idea is to attach them to different Docker instances in order to use those instances for CUDA (or OpenCL) calculations.
My goal is to set up a Docker image with a fairly old Ubuntu and a fairly old AMD video driver (13.04). The reason is simple: upgrading to a newer driver version breaks my OpenCL program (due to buggy AMD Linux drivers).
So the question is: is it possible to run a Docker image with an old Ubuntu, an old kernel (3.14, for example), and an old AMD (fglrx) driver on a fresh Arch Linux setup with a fresh 4.2 kernel and the newer AMD (fglrx) drivers from the repository?
P.S. I tried this answer (with the Nvidia cards), and unfortunately deviceQuery inside the Docker image doesn't see any CUDA devices (as happened to some commenters on the original answer)...
P.P.S. My setup:
GPUs:
1 x Radeon HD 7970
$ lspci -nn | grep Rad
83:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT [Radeon HD 7970/8970 OEM / R9 280X] [1002:6798]
83:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT HDMI Audio [Radeon HD 7970 Series] [1002:aaa0]
2 x GeForce GTX Titan Black
Docker never uses a different kernel: the kernel is always your host kernel. If your host kernel is "compatible enough" with the software in the container you want to run, it will work; otherwise it won't.
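A quick way to convince yourself of this (the ubuntu:14.04 tag below is just an illustrative image):
$ uname -r                                # prints the host (Arch) kernel version
$ docker run --rm ubuntu:14.04 uname -r   # prints exactly the same version: the container shares the host kernel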
For the Nvidia cards, first make sure the host driver works: you should be able to run nvidia-smi and see each GPU's name, the driver version, and the CUDA version. To use the GPUs with Docker, add the NVIDIA Container Toolkit to your host; it integrates with Docker Engine to automatically configure your containers for GPU support.
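A rough sketch of that setup, assuming a Debian/Ubuntu host with NVIDIA's package repository already configured (the package manager commands and the CUDA image tag are illustrative and will differ on Arch):
$ sudo apt-get install -y nvidia-container-toolkit    # install the toolkit on the host
$ sudo nvidia-ctk runtime configure --runtime=docker  # register the NVIDIA runtime with Docker Engine
$ sudo systemctl restart docker
$ docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi   # sanity check from inside a container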
Key points: Linux containers support all graphics APIs for NVIDIA GPUs using the NVIDIA Container Toolkit. Windows containers support DirectX-based graphics APIs for GPUs from all vendors using native hardware acceleration support.
The Docker engine doesn't natively support NVIDIA GPUs: they are specialized hardware that requires the NVIDIA driver to be installed on the host and the devices to be made available to the container.
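For reference, before the toolkit existed the usual workaround was to map the NVIDIA device nodes into the container by hand, roughly as below (my-cuda-image is a placeholder; note that the CUDA user-space libraries inside the image must match the host kernel module version exactly, which is the usual reason deviceQuery reports no devices):
$ docker run --rm \
    --device /dev/nvidiactl \
    --device /dev/nvidia-uvm \
    --device /dev/nvidia0 \
    --device /dev/nvidia1 \
    my-cuda-image ./deviceQuery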
With Docker you rely on operating-system-level virtualization, which means all containers share the same (host) kernel. If you want to run a different kernel per container, you'll have to use full system virtualization instead, e.g. KVM or VirtualBox. If your setup supports Intel's VT-d, you can pass the GPU through as a PCIe device to the guest (which in that case is a virtual machine rather than a container).
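A rough illustration of that passthrough route, using the Radeon's PCI address from the lspci output above (the IOMMU/vfio-pci binding steps, the disk image name, and the rest of the QEMU command line are placeholders that depend on your distribution):
# the host kernel must be booted with the IOMMU enabled (e.g. intel_iommu=on)
$ find /sys/kernel/iommu_groups/ -type l      # check that IOMMU groups exist and how the card is grouped
# after binding both functions of the card to vfio-pci, hand them to the VM:
$ qemu-system-x86_64 -enable-kvm -m 8G \
    -device vfio-pci,host=83:00.0 \
    -device vfio-pci,host=83:00.1 \
    -hda old-ubuntu-13.04.img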