I have a simple Deployment with two replicas.
I would like each replica to see the same storage folder (a shared application upload folder).
I've been experimenting with claims and volumes but haven't cracked it yet, so I'm asking for a quick example.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: 'test-tomcat'
  labels:
    app: test-tomcat
spec:
  selector:
    matchLabels:
      app: test-tomcat
  replicas: 2
  template:
    metadata:
      name: 'test-tomcat'
      labels:
        app: test-tomcat
    spec:
      volumes:
        - name: 'data'
          persistentVolumeClaim:
            claimName: claim
      containers:
        - image: 'tomcat:9-alpine'
          volumeMounts:
            - name: 'data'
              mountPath: '/app/data'
          imagePullPolicy: Always
          name: 'tomcat'
          command: ['bin/catalina.sh', 'jpda', 'run']
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
For sharing files amongst pods, I recommend mounting a Google Cloud Storage bucket on each node in your Kubernetes cluster (for example with Cloud Storage FUSE), and then exposing that node directory to each pod as a volume, rather than mounting the bucket from inside the pods themselves.
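A minimal sketch of that layout, assuming the bucket is already mounted at /mnt/gcs-bucket on every node (the mount path and all names here are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: uploads-app              # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: uploads-app
  template:
    metadata:
      labels:
        app: uploads-app
    spec:
      volumes:
        - name: shared-uploads
          hostPath:
            path: /mnt/gcs-bucket   # assumed node-level mount point of the bucket
            type: Directory
      containers:
        - name: app
          image: tomcat:9-alpine
          volumeMounts:
            - name: shared-uploads
              mountPath: /app/data  # every replica sees the same bucket contents

Because the bucket itself is the shared backend, the replicas see the same files even when they are scheduled on different nodes.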
First of all, you need to decide what type of PersistentVolume to use. Here are several examples for an on-premises cluster:
HostPath - a local path on a node. If the first Pod is scheduled on Node1 and the second on Node2, the two Pods end up with different storage. To resolve this problem, either pin all replicas to the same node (see the nodeSelector sketch after the example) or use one of the network-backed options below. Example of a HostPath PersistentVolume:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: example-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
NFS - a PersistentVolume of this type uses the Network File System. NFS is a distributed file system protocol that allows you to mount remote directories on your servers, so Pods on different nodes can share the same data. You need to set up an NFS server before using it in Kubernetes; see, for example, How To Set Up an NFS Mount on Ubuntu. Example in Kubernetes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 3Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
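This PV declares storageClassName: slow, and a claim only binds to it if it requests the same class and an access mode the PV offers. A minimal matching claim might look like this (the claim name is a placeholder):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-claim            # hypothetical name
spec:
  storageClassName: slow     # must match the PV's storageClassName
  accessModes:
    - ReadWriteOnce          # must be a mode the PV above declares
  resources:
    requests:
      storage: 3Gi

For the shared-upload use case, NFS also supports ReadWriteMany; declaring that mode on both the PV and the claim lets replicas on different nodes mount the volume simultaneously.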
GlusterFS - a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace. As with NFS, you need to install GlusterFS before using it in Kubernetes; the GlusterFS documentation has installation instructions, and the Kubernetes examples repository ships a GlusterFS sample. Example in Kubernetes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
  annotations:
    pv.beta.kubernetes.io/gid: "590"
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: myVol1
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
    - port: 1
---
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 192.168.122.221
    ports:
      - port: 1
  - addresses:
      - ip: 192.168.122.222
    ports:
      - port: 1
  - addresses:
      - ip: 192.168.122.223
    ports:
      - port: 1
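This PV declares no storageClassName, so a claim intended for it should request the empty class explicitly; otherwise the cluster's default StorageClass may be applied instead. A minimal sketch (the claim name is a placeholder):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-claim      # hypothetical name
spec:
  storageClassName: ""       # empty string: bind only to PVs with no class
  accessModes:
    - ReadWriteMany          # matches the PV above; allows cross-node sharing
  resources:
    requests:
      storage: 3Gi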
After creating a PersistentVolume, you need to create a PersistentVolumeClaim. A PersistentVolumeClaim is the resource Pods use to request storage. After you create the PersistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements: the storageClassName must match, the PV must offer the requested access mode, and its capacity must be at least the requested size. Example:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
As the last step, configure the Pods to use the PersistentVolumeClaim. Here is the example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: 'test-tomcat'
  labels:
    app: test-tomcat
spec:
  selector:
    matchLabels:
      app: test-tomcat
  replicas: 3
  template:
    metadata:
      name: 'test-tomcat'
      labels:
        app: test-tomcat
    spec:
      volumes:
        - name: 'data'
          persistentVolumeClaim:
            claimName: example-pv-claim # must match the PersistentVolumeClaim name defined above
      containers:
        - image: 'tomcat:9-alpine'
          volumeMounts:
            - name: 'data'
              mountPath: '/app/data'
          imagePullPolicy: Always
          name: 'tomcat'
          command: ['bin/catalina.sh', 'jpda', 'run']
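Once everything is applied, kubectl get pv,pvc should show the claim as Bound before the Pods come up, and creating a file under /app/data from one replica (for example via kubectl exec) should make it visible in the other replicas, provided the volume backend actually supports sharing across the nodes involved.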