I am trying to mount an NFS volume to my pods but with no success.
I have a server exporting an NFS mount point. When I connect to it from another running server with

sudo mount -t nfs -o proto=tcp,port=2049 10.0.0.4:/export /mnt

it works fine.
Another thing worth mentioning: when I remove the volume from the deployment and the pod is running, I can log into it and telnet to 10.0.0.4 on ports 111 and 2049 successfully, so there really does not seem to be any communication problem.
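(A sketch of that check; the pod name placeholder is whatever name the deployment generated:)

kubectl exec -it <pod-name> -- /bin/sh
# from inside the pod, both the portmapper and NFS ports answer:
telnet 10.0.0.4 111
telnet 10.0.0.4 2049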
The exports are also listed correctly:
showmount -e 10.0.0.4
Export list for 10.0.0.4:
/export/drive 10.0.0.0/16
/export       10.0.0.0/16
So I can assume there are no network or configuration problems between the server and the client (I am on Amazon, and the server I tested from is in the same security group as the k8s minions).
P.S.: The server is a simple Ubuntu machine with a 50 GB disk.
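For reference, an /etc/exports consistent with the showmount output above would look roughly like this (the paths and the 10.0.0.0/16 client range come from that output; the export options are my assumption):

/export        10.0.0.0/16(rw,sync,no_subtree_check)
/export/drive  10.0.0.0/16(rw,sync,no_subtree_check)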
Kubernetes v1.3.4
So I start by creating my PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.4
    path: "/export"
And my PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
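Both are created and checked the usual way (the file names here are mine):

kubectl create -f nfs-pv.yaml
kubectl create -f nfs-pvc.yaml
kubectl get pv nfs
kubectl get pvc nfs-claim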
Here is how kubectl describes them:
Name:            nfs
Labels:          <none>
Status:          Bound
Claim:           default/nfs-claim
Reclaim Policy:  Retain
Access Modes:    RWX
Capacity:        50Gi
Message:
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    10.0.0.4
    Path:      /export
    ReadOnly:  false
No events.
AND
Name:          nfs-claim
Namespace:     default
Status:        Bound
Volume:        nfs
Labels:        <none>
Capacity:      0
Access Modes:
No events.
Pod deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mypod
  labels:
    name: mypod
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      name: mypod
      labels:
        # Important: these labels need to match the selector above, the api server enforces this constraint
        name: mypod
    spec:
      containers:
      - name: abcd
        image: irrelevant to the question
        ports:
        - containerPort: 80
        env:
        - name: hello
          value: world
        volumeMounts:
        - mountPath: "/mnt"
          name: nfs
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs-claim
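This is deployed and inspected with (the file name is mine):

kubectl create -f mypod-deployment.yaml
kubectl describe pods -l name=mypod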
When I deploy my pod I get the following:
Volumes:
  nfs:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nfs-claim
    ReadOnly:   false
  default-token-6pd57:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6pd57
QoS Tier:  BestEffort
Events:
  FirstSeen  LastSeen  Count  From                                                SubobjectPath  Type     Reason       Message
  ---------  --------  -----  ----                                                -------------  -------  ------       -------
  13m        13m       1      {default-scheduler }                                               Normal   Scheduled    Successfully assigned xxx-2140451452-hjeki to ip-10-0-0-157.us-west-2.compute.internal
  11m        7s        6      {kubelet ip-10-0-0-157.us-west-2.compute.internal}                 Warning  FailedMount  Unable to mount volumes for pod "xxx-2140451452-hjeki_default(93ca148d-6475-11e6-9c49-065c8a90faf1)": timeout expired waiting for volumes to attach/mount for pod "xxx-2140451452-hjeki"/"default". list of unattached/unmounted volumes=[nfs]
  11m        7s        6      {kubelet ip-10-0-0-157.us-west-2.compute.internal}                 Warning  FailedSync   Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "xxx-2140451452-hjeki"/"default". list of unattached/unmounted volumes=[nfs]
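For completeness, the same mount can also be tried by hand on the worker node named in the events above, since the kubelet performs the mount there (the mount point is arbitrary, and the log command assumes a systemd-managed kubelet):

sudo mkdir -p /tmp/nfs-test
sudo mount -t nfs -o proto=tcp,port=2049 10.0.0.4:/export /tmp/nfs-test
journalctl -u kubelet --no-pager | grep -i nfs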
I have tried everything I know and everything I can think of. What am I missing or doing wrong here?
I tested versions 1.3.4 and 1.3.5 of Kubernetes, and the NFS mount didn't work for me. Later I switched to 1.2.5, and that version gave me more detailed info (kubectl describe pod ...). It turned out that 'nfs-common' is missing from the hyperkube image. After I added nfs-common to all container instances based on the hyperkube image on the master and worker nodes, the NFS share started working normally (the mount was successful). So that's the issue here; I tested this in practice and it solved my problem.
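A sketch of one way to bake that fix in, assuming the stock Debian-based hyperkube image (the image name and tag here are an example; adjust to whatever your nodes actually run):

# Dockerfile: rebuild hyperkube with the NFS client tools included
FROM gcr.io/google_containers/hyperkube-amd64:v1.3.4
RUN apt-get update && \
    apt-get install -y nfs-common && \
    rm -rf /var/lib/apt/lists/*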