
How to share storage between Kubernetes pods?

I am evaluating Kubernetes as a platform for our new application. For now, it all looks very exciting! However, I'm running into a problem: I'm hosting my cluster on GCE and I need some mechanism to share storage between two pods - the continuous integration server and my application server. What's the best way of doing this with Kubernetes? None of the volume types seems to fit my needs, since GCE disks can't be shared if one pod needs to write to the disk. NFS would be perfect, but seems to require special build options for the Kubernetes cluster?

EDIT: Sharing storage seems to be a problem that I have encountered multiple times now using Kubernetes. There are multiple use cases where I'd just like to have one volume and hook it up to multiple pods (with write access). I can only assume that this would be a common use case, no?

EDIT2: For example, this page describes how to set up an Elasticsearch cluster, but wiring it up with persistent storage is impossible (as described here), which kind of renders it pointless :(

Asked Jul 29 '15 by Marco Lamina


2 Answers

Firstly, do you really need multiple readers / writers?

From my experience of Kubernetes / micro-service architecture (MSA), the issue is often more related to your design pattern. One of the fundamental design patterns with MSA is the proper encapsulation of services, and this includes the data owned by each service.

In much the same way as in OOP, your service should look after the data that is related to its area of concern and should allow access to this data to other services via an interface. This interface could be an API, messages handled directly or via a broker service, or protocol buffers and gRPC. Generally, multi-service access to data is an anti-pattern, akin to global variables in OOP and most programming languages.

As an example, if you were looking to write logs, you should have a log service which each service can call with the relevant data it needs to log. Writing directly to a shared disk means that you'd need to update every container if you change your log directory structure, or decide to add extra functionality like sending emails on certain types of errors.

In the majority of cases, you should be using some form of minimal interface before resorting to a file system, avoiding the unintended side-effects of Hyrum's law that you are exposed to when using a file system. Without proper interfaces / contracts between your services, you heavily reduce your ability to build maintainable and resilient services.

Ok, your situation is best solved using a file system. There are a number of options...

There are obviously times when a file system that can handle multiple concurrent writers provides a superior solution over more 'traditional' MSA forms of communication. Kubernetes supports a large number of volume types, which can be found here. While this list is quite long, many of these volume types don't support multiple writers (also known as ReadWriteMany in Kubernetes).

The volume types that do support ReadWriteMany can be found in this table, and at the time of writing these are AzureFile, CephFS, Glusterfs, Quobyte, NFS and PortworxVolume.

There are also operators such as the popular rook.io which are powerful and provide some great features, but the learning curve for such systems can be a difficult climb when you just want a simple solution and to keep moving forward.

The simplest approach.

In my experience, the best initial option is NFS. This is a great way to learn the basic ideas around ReadWriteMany Kubernetes storage, will serve most use cases and is the easiest to implement. After you've built a working knowledge of multi-service persistence, you can then make more informed decisions to use more feature rich offerings which will often require more work to implement.

The specifics of setting up NFS differ based on how and where your cluster is running and on the specifics of your NFS service. I've previously written two articles on how to set up NFS: one for on-prem clusters and one for using AWS's NFS equivalent, EFS, on EKS clusters. These two articles give a good contrast of just how different implementations can be given your particular situation.

For a bare minimum example, you will firstly need an NFS service. If you're looking to do a quick test or you have low SLO requirements, following this DO article is a great quick primer for setting up NFS on Ubuntu. If you have an existing NAS which provides NFS and is accessible from your cluster, this will work as well.

Once you have an NFS service, you can create a persistent volume similar to the following:

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-name
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  nfs:
    server: 255.0.255.0 # IP address of your NFS service
    path: "/desired/path/in/nfs"

A caveat here is that your nodes will need the NFS client binaries installed to be able to mount NFS, and I've discussed this more in my on-prem cluster article. This is also the reason you need to use EFS when running on EKS, as your nodes don't have the ability to connect to NFS.

Once you have the persistent volume set up, it is a simple case of using it like you would any other volume.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: d-name              # added so the Deployment is a valid manifest
spec:
  selector:
    matchLabels:
      app: d-name           # added; must match the pod template labels
  template:
    metadata:
      labels:
        app: d-name
    spec:
      containers:
        - name: p-name
          image: busybox    # placeholder image; use your own
          volumeMounts:
            - mountPath: /data
              name: v-name
      volumes:
        - name: v-name
          persistentVolumeClaim:
            claimName: pvc-name
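To tie this back to the original question, the same ReadWriteMany claim can then be mounted by a second workload (for example a CI server alongside the application server). A minimal sketch, with a hypothetical name and a placeholder image:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ci-server                   # hypothetical second workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ci-server
  template:
    metadata:
      labels:
        app: ci-server
    spec:
      containers:
        - name: ci-server
          image: busybox            # placeholder image; use your own
          command: ["sleep", "infinity"]
          volumeMounts:
            - mountPath: /data      # both workloads see the same files here
              name: v-name
      volumes:
        - name: v-name
          persistentVolumeClaim:
            claimName: pvc-name     # the same claim used above

Both deployments then read and write the same NFS-backed directory, which is exactly the CI server / application server scenario from the question.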
Answered Sep 25 '22 by Ian Belcher


First of all, Kubernetes doesn't have integrated functionality to share storage between hosts. There are several options below. But first, here is how to share storage if you already have some volumes set up.

To share a volume between multiple pods, you'd need to create a PVC with access mode ReadWriteMany:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: myvolume
  resources:
    requests:
      storage: 1Gi

After that you can mount it to multiple pods:

apiVersion: v1
kind: Pod
metadata:
  name: myapp1
spec:
  containers:
    - name: myapp1
      image: busybox   # placeholder; use your own container spec
      volumeMounts:
        - mountPath: /data
          name: data
          subPath: app1
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp2
spec:
  containers:
    - name: myapp2
      image: busybox   # placeholder; use your own container spec
      volumeMounts:
        - mountPath: /data
          name: data
          subPath: app2
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc

Of course, the persistent volume must be accessible over the network. Otherwise you'd need to make sure that all the pods are scheduled to the node that has that volume; one way to do that is sketched below.
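For the non-networked case, you can pin every pod that uses the volume to the same node via the well-known kubernetes.io/hostname node label. A minimal sketch (node name, pod name and image are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: myapp1
spec:
  nodeSelector:
    kubernetes.io/hostname: node-with-volume   # the node that holds the volume
  containers:
    - name: app
      image: busybox                           # placeholder image; use your own
      command: ["sleep", "infinity"]
      volumeMounts:
        - mountPath: /data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc

Every pod that shares the volume would need the same nodeSelector (or an equivalent node affinity rule).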

There are several volume types that are suitable for that and not tied to any cloud provider:

  • NFS
  • RBD (Ceph Block Device)
  • CephFS
  • Glusterfs
  • Portworx Volumes

Of course, to use a volume you need to have it first. That is, if you want to consume NFS, you need to set up NFS on all nodes in the K8s cluster. If you want to consume Ceph, you need to set up a Ceph cluster, and so on.
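For instance, once a Ceph cluster exists, a PersistentVolume can point at CephFS directly via the cephfs volume type. A rough sketch (monitor address, secret name and values are hypothetical):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - "10.16.154.78:6789"   # address of a Ceph monitor (hypothetical)
    user: admin
    secretRef:
      name: ceph-secret       # Secret holding the Ceph client key (hypothetical)
    readOnly: false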

The only one of these volume types that Kubernetes supports out of the box is Portworx. There are instructions on how to set it up in GKE.
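With Portworx installed, dynamic provisioning goes through the in-tree kubernetes.io/portworx-volume provisioner. A rough sketch of a StorageClass (the parameter values are just an assumption; check the Portworx docs for your version):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: portworx-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"          # number of replicas Portworx keeps per volume (assumed value)
  shared: "true"     # shared volume usable by multiple pods (assumed value)

A PVC can then reference storageClassName: portworx-sc, much like the my-pvc example above.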

To set up a Ceph cluster in K8s, there's a project in development called Rook.

But this is all overkill if you just want a folder on one node to be available on another node. In this case, just set up an NFS server. It isn't any harder than provisioning other volume types and will consume far fewer CPU/memory/disk resources.
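In that simple case you don't even need a PV/PVC; a pod can mount the export directly with the built-in nfs volume type. A minimal sketch (server address, export path, pod name and image are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: nfs-client
spec:
  containers:
    - name: app
      image: busybox               # placeholder image; use your own
      command: ["sleep", "infinity"]
      volumeMounts:
        - mountPath: /shared       # files written here land on the NFS export
          name: shared
  volumes:
    - name: shared
      nfs:
        server: 10.0.0.10          # address of your NFS server (hypothetical)
        path: /exported/folder     # exported directory (hypothetical)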

Answered Sep 23 '22 by Vanuan