
Kubernetes Persistent Volume and hostpath

I was experimenting with Kubernetes Persistent Volumes. I can't find a clear explanation in the Kubernetes documentation, and the behaviour is not what I expect, so I'd like to ask here.

I configured the following Persistent Volume and Persistent Volume Claim.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: store-persistent-volume
  namespace: test
spec:
  storageClassName: hostpath
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data/data"

---

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: store-persistent-volume-claim
  namespace: test
spec:
  storageClassName: hostpath
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
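
If the claim's requested storage, access modes, and storageClassName match the volume above, the PVC should bind to the PV. A quick way to check (assuming kubectl is configured for this cluster) is to confirm both report a STATUS of Bound:

```shell
# Both should show STATUS "Bound"; a PVC stuck in "Pending" means no matching PV was found
kubectl get pv store-persistent-volume
kubectl get pvc store-persistent-volume-claim -n test
```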

and the following Deployment and Service configuration.

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: store-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: store
  template:
    metadata:
      labels:
        k8s-app: store
    spec:
      volumes:
      - name: store-volume
        persistentVolumeClaim:
          claimName: store-persistent-volume-claim
      containers:
      - name: store
        image: localhost:5000/store
        ports:
        - containerPort: 8383
          protocol: TCP
        volumeMounts:
        - name: store-volume
          mountPath: /data

---
#------------ Service ----------------#

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: store
  name: store
  namespace: test
spec:
  type: LoadBalancer
  ports:
  - port: 8383
    targetPort: 8383
  selector:
    k8s-app: store

As you can see, I defined '/Volumes/Data/data' as the Persistent Volume's host path and expect it to be mounted at '/data' in the container.

So I am assuming that whatever is in '/Volumes/Data/data' on the host should be visible in the '/data' directory in the container. Is this assumption correct? Because this is definitely not happening at the moment.

My second assumption is that whatever I save under '/data' should be visible on the host, which is also not happening.
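
Both assumptions can be tested directly by exec-ing into the running pod. A sketch (`<store-pod-name>` is a placeholder; get the real pod name with `kubectl get pods -n test`):

```shell
# Create a file on the host side of the mount
touch /Volumes/Data/data/from-host.txt

# It should be visible inside the container
kubectl exec -n test <store-pod-name> -- ls /data

# Write a file from inside the container ...
kubectl exec -n test <store-pod-name> -- touch /data/from-pod.txt

# ... and check whether it appears on the host
ls /Volumes/Data/data
```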

I can see from the Kubernetes console that everything started correctly (Persistent Volume, Claim, Deployment, Pod, Service...).

Am I understanding the persistent volume concept correctly at all?

PS. I am trying this on a Mac with Docker (18.05.0-ce-mac67 (25042), edge channel); maybe it doesn't work on a Mac?

Thanks for any answers.

asked Jul 11 '18 by posthumecaver


1 Answer

Assuming you are using a multi-node Kubernetes cluster, you should be able to see the data mounted locally at /Volumes/Data/data on the specific worker node where the pod is running.

You can check which worker node your pod is scheduled on with the command kubectl get pods -o wide -n test.

Please note that, per the Kubernetes docs, a HostPath PersistentVolume is for single-node testing only; it is not supported in any way and WILL NOT WORK in a multi-node cluster.
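
For a multi-node cluster, a `local` PersistentVolume with required node affinity is the supported way to expose a directory on a specific node. A minimal sketch, following the question's paths (the node name `worker-1` is an assumption; substitute one of your actual node names from `kubectl get nodes`):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: store-local-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /Volumes/Data/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-1    # hypothetical node name; the directory must exist on this node
```

Unlike hostPath, the scheduler uses the nodeAffinity to place pods that use this volume on the node that actually holds the data.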

It does work in my case.

answered Oct 06 '22 by Learner