 

Storage ReadWriteMany in Google Kubernetes Engine

Is there a way to be able to provide ReadWriteMany storage without having to implement a storage cluster?

I was able to provide storage with gcsfuse but it is really slow. I need something close to the speed of GlusterFS.

I am currently using GlusterFS.

asked Nov 28 '17 by jbelenus


2 Answers

Another option: Google Cloud Platform recently started offering a managed NFS service called Cloud Filestore.

Note that as of this writing, Cloud Filestore is still in beta.

Here's the description:

Use Cloud Filestore to create fully managed NFS file servers on Google Cloud Platform (GCP) for use with applications running on Compute Engine virtual machines (VMs) instances or Kubernetes Engine clusters.

Create and manage Cloud Filestore instances by using the GCP console or the gcloud command-line tool, and interact with the NFS fileshare on the instance by using standard operating system commands.
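As a sketch of how a Filestore share can be consumed from GKE (the server IP `10.0.0.2` and share name `vol1` are placeholders — use the values reported by your own Filestore instance), a pre-provisioned PersistentVolume plus a matching claim would look something like this:

```yaml
# Hypothetical PV/PVC pair pointing at a Cloud Filestore instance.
# The server IP and share path come from the Filestore instance you
# created; substitute your own values.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: filestore-pv
spec:
  capacity:
    storage: 1T
  accessModes:
    - ReadWriteMany          # NFS supports many writers at once
  nfs:
    server: 10.0.0.2         # Filestore instance IP (placeholder)
    path: /vol1              # file share name (placeholder)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filestore-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""       # bind to the pre-provisioned PV above
  resources:
    requests:
      storage: 1T
```

Pods then reference `filestore-pvc` like any other claim, and multiple pods can mount it read-write simultaneously.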

answered Oct 09 '22 by Luciano


You could create an NFS server and then mount its storage from your nodes/pods. This supports ReadWriteMany, as you require. I'm unsure whether it's faster or slower than GlusterFS, although this suggests it is faster (with async, the default export setting).

You would first need to create an NFS server to provide the storage. The easiest way to do this is to launch a single-node file server; there is a 'click to deploy' option for simplicity that you can navigate to from this page.

The shared storage on the NFS server must be exported before the nodes in the cluster can access it. SSH into the machine and edit the /etc/exports file, adding an entry with the IP addresses that require access to the machine's storage. Once the /etc/exports file has been configured, restart the NFS service:

sudo systemctl restart nfs-kernel-server.service
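For illustration, an /etc/exports entry granting a GKE cluster's node subnet read-write access might look like the following (the export path and CIDR are placeholders for your own values):

```
# /etc/exports on the NFS server
# /mnt/nfs-share -> directory being exported
# 10.128.0.0/20  -> subnet of the GKE nodes allowed to mount it
/mnt/nfs-share  10.128.0.0/20(rw,async,no_root_squash,no_subtree_check)
```

Here `rw` permits writes and `async` is the default mode referenced above. After editing the file, `sudo exportfs -ra` re-reads the exports without a full service restart.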

There is a good example here of incorporating the NFS server with Kubernetes pods/nodes.
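As a minimal sketch of that integration (the server IP and export path below are assumptions, not values from the answer), a pod can mount the NFS export directly with an `nfs` volume, which allows several pods to share the same storage read-write:

```yaml
# Hypothetical pod mounting the export from the single-node NFS
# server; multiple replicas can mount the same share concurrently.
apiVersion: v1
kind: Pod
metadata:
  name: nfs-client
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /data
  volumes:
    - name: shared
      nfs:
        server: 10.128.0.10   # internal IP of the NFS server VM (placeholder)
        path: /mnt/nfs-share  # directory exported in /etc/exports
```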

answered Oct 09 '22 by neilH