Kubernetes supports GPUs as an experimental feature. Does it work in google container engine? Do I need to have some special configuration to enable it? I want to be able to run machine learning workloads, but want to use Python 3 which isn't available in CloudML.
You can run GPU workloads on Google Cloud Platform, where you also have access to industry-leading storage, networking, and data analytics technologies.
Google Cloud Platform (GCP) is the world's third largest cloud provider. Google offers a number of virtual machine (VM) types that can be provisioned with graphics processing units (GPUs), including the NVIDIA Tesla K80, P4, T4, P100, and V100.
To use GPUs in Kubernetes, the NVIDIA device plugin is required. The device plugin runs as a DaemonSet that automatically enumerates the GPUs on each node of the cluster and exposes them to the kubelet, so that pods can request them as the nvidia.com/gpu resource (a minimal example follows).
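As a rough illustration, here is a minimal sketch using the official Kubernetes Python client to create a pod that requests one GPU through the device plugin's nvidia.com/gpu resource. The image name, pod name, and namespace are placeholder assumptions, not anything mandated by GKE:

from kubernetes import client, config

# Load credentials from the local kubeconfig (e.g. as set up by gcloud).
config.load_kube_config()

container = client.V1Container(
    name="cuda-container",
    image="nvidia/cuda:11.0-base",  # assumption: any CUDA-capable image
    command=["nvidia-smi"],
    resources=client.V1ResourceRequirements(
        # The device plugin advertises GPUs as the nvidia.com/gpu resource.
        limits={"nvidia.com/gpu": "1"}
    ),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-test"),
    spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

The same resource request can of course be written as a plain YAML manifest and applied with kubectl; the key point is the nvidia.com/gpu limit on the container.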
Hypervisors support GPUs in either pass-through or virtual GPU (vGPU) mode. GPU support usually has to be enabled deliberately and added to a VM's configuration before the VM can use the GPU.
GPUs on Google Container Engine are now available in Alpha; there is a sign-up form to request access.
Beware that alpha cluster limitations apply: they cannot be upgraded, and they will be auto-deleted in 30 days.
Disclaimer: I work at GCP.
I am afraid this is not supported out of the box. When creating a regular instance in Google Compute Engine (GCE), you can select GPU specs for your machine. When creating a Container Engine cluster, on the other hand, those options are not available. I imagine this will be supported sooner or later, but it is not at the moment.
As an alternative, you can create several GCE instances with GPUs attached and build a cluster yourself using tools like kubeadm, or by following guides such as Kubernetes the hard way: https://github.com/kelseyhightower/kubernetes-the-hard-way (a sketch of creating such an instance follows).
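For completeness, here is a hedged sketch of creating one GPU-equipped GCE instance with the Google API Python client, which you could then join to a self-managed cluster. The project ID, zone, machine type, image, and accelerator type are placeholder assumptions, and GPU availability varies by zone:

from googleapiclient import discovery

# Uses application default credentials (e.g. from `gcloud auth application-default login`).
compute = discovery.build("compute", "v1")

project = "my-project"  # assumption: placeholder project ID
zone = "us-east1-d"     # assumption: a zone that offers the chosen GPU type

instance_body = {
    "name": "gpu-node-1",
    "machineType": f"zones/{zone}/machineTypes/n1-standard-4",
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            "sourceImage": "projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts"
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
    "guestAccelerators": [{
        "acceleratorType": f"projects/{project}/zones/{zone}/acceleratorTypes/nvidia-tesla-k80",
        "acceleratorCount": 1,
    }],
    # GPU instances cannot live-migrate, so they must terminate on host maintenance.
    "scheduling": {"onHostMaintenance": "TERMINATE", "automaticRestart": True},
}

compute.instances().insert(project=project, zone=zone, body=instance_body).execute()

After the instances are up, you would still need to install the NVIDIA drivers and the device plugin on each node before Kubernetes can schedule GPU workloads onto them.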