I am running my Docker containers in a Kubernetes cluster on AWS EKS. Two of these containers need a shared volume, and each of them runs in a different pod, so I want a common volume on AWS that both pods can use.
I created an EFS volume and mounted it, and I am following this link to create the PersistentVolumeClaim. However, I am getting a timeout error when the efs-provisioner pod tries to attach the mounted EFS volume. The VolumeId and region are correct.
Detailed error message from the pod describe:
timeout expired waiting for volumes to attach or mount for pod "default"/"efs-provisioner-55dcf9f58d-r547q". list of unmounted volumes=[pv-volume]. list of unattached volumes=[pv-volume default-token-lccdw]
MountVolume.SetUp failed for volume "pv-volume" : mount failed: exit status 32
AWS EFS uses the NFS volume plugin, and as per the Kubernetes Storage Classes documentation, the NFS volume plugin does not come with an internal provisioner the way EBS does.
So the steps will be:
Use the volume claim in your Deployment (a sketch of this is shown after the manifests below).
In the ConfigMap section, change file.system.id: and aws.region: to match the details of the EFS you created.
In the Deployment section, change server: to the DNS endpoint of the EFS you created.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner
data:
  file.system.id: yourEFSsystemid
  aws.region: regionyourEFSisin
  provisioner.name: example.com/aws-efs
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: efs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:latest
          env:
            - name: FILE_SYSTEM_ID
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: file.system.id
            - name: AWS_REGION
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: aws.region
            - name: PROVISIONER_NAME
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: provisioner.name
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: yourEFSsystemID.efs.yourEFSregion.amazonaws.com
            path: /
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: example.com/aws-efs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
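Once the claim is bound, reference it from the workloads that need the shared storage. Below is a minimal sketch, assuming only the PVC named efs created above; the deployment name, image, command, and mount path are placeholders. Because the access mode is ReadWriteMany, both of your pods (or both replicas here) can mount the same claim.
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: app-using-efs           # placeholder name
spec:
  replicas: 2                   # both replicas (or two separate Deployments) share the volume
  template:
    metadata:
      labels:
        app: app-using-efs
    spec:
      containers:
        - name: app
          image: busybox        # placeholder image
          command: ["sh", "-c", "sleep 3600"]
          volumeMounts:
            - name: shared-data
              mountPath: /data  # placeholder mount path inside the container
      volumes:
        - name: shared-data
          persistentVolumeClaim:
            claimName: efs      # the PVC defined above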
For more explanation and details, go to https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs
The problem for me was that I was specifying a path in my PV other than /, and the directory on the NFS server referenced by that path did not yet exist. I had to manually create that directory first.
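For illustration, a rough sketch of this kind of PV spec is below; the server DNS name and the subdirectory name are placeholders. The path: given here must already exist on the EFS filesystem (for example, mount the EFS on an EC2 instance and mkdir it) before the PV can be mounted by a pod.
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: yourEFSsystemID.efs.yourEFSregion.amazonaws.com
    path: /my-subdir            # this directory must already exist on the EFS filesystem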