 

Copy PVC files locally without a dedicated pod

We have a PVC that is written to by many k8s cronjobs. We'd like to periodically copy this data locally. Ordinarily one would use kubectl cp to do such tasks, but since there's no actively running pod with the PVC mounted, this is not possible.

We've been using a modified version of this gist script kubectl-run-with-pvc.sh to create a temporary pod (running sleep 300) and then kubectl cp from this temporary pod to get the PVC data. This "works" but seems kludgey.
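For reference, the temporary-pod workaround boils down to something like the following (the pod name, claim name, and image are placeholders):

```shell
PVC_NAME="my-pvc"     # placeholder: the claim to copy from
POD_NAME="pvc-reader"

# Create a short-lived pod that just mounts the PVC and sleeps
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: ${POD_NAME}
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sleep", "300"]
    volumeMounts:
    - mountPath: /data
      name: pvc
  volumes:
  - name: pvc
    persistentVolumeClaim:
      claimName: ${PVC_NAME}
EOF

# Wait for it, copy the data locally, then clean up
kubectl wait --for=condition=Ready pod/${POD_NAME}
kubectl cp ${POD_NAME}:/data ./pvc-backup
kubectl delete pod ${POD_NAME}
```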

Is there a more elegant way to achieve this?

asked Dec 21 '25 by colm.anseo

1 Answer

May I propose using NFS instead of a direct PVC mount?

If you do not have an NFS server, you can run one inside the k8s cluster using this image: https://hub.docker.com/r/itsthenetwork/nfs-server-alpine. The in-cluster NFS server itself uses a PVC for its storage, but your pods mount it over NFS instead.

Meaning, instead of pod --> PVC, the path becomes pod --> NFS --> PVC.

Here is the script I quite often use to create dedicated in-cluster NFS servers (just modify the variables at the top of the script accordingly):

export NFS_NAME="nfs-share"
export NFS_SIZE="10Gi"
export NFS_SERVER_IMAGE="itsthenetwork/nfs-server-alpine:latest"
export STORAGE_CLASS="thin-disk"

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: ${NFS_NAME}
  labels:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
spec:
  ports:
  - name: tcp-2049
    port: 2049
    protocol: TCP
  - name: udp-111
    port: 111
    protocol: UDP
  selector:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
  name: ${NFS_NAME}
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: ${NFS_SIZE}
  storageClassName: ${STORAGE_CLASS}
  volumeMode: Filesystem
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${NFS_NAME}
  labels:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: nfs-server
      app.kubernetes.io/instance: ${NFS_NAME}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nfs-server
        app.kubernetes.io/instance: ${NFS_NAME}
    spec:
      containers:
      - name: nfs-server
        image: ${NFS_SERVER_IMAGE}
        ports:
        - containerPort: 2049
          name: tcp
        - containerPort: 111
          name: udp
        securityContext:
          privileged: true
        env:
        - name: SHARED_DIRECTORY
          value: /nfsshare
        volumeMounts:
        - name: pvc
          mountPath: /nfsshare
      volumes:
      - name: pvc
        persistentVolumeClaim:
          claimName: ${NFS_NAME}
EOF

To mount the NFS share inside your pod, first get the Service's cluster IP (the NFS mount is performed by the node's kernel, which generally cannot resolve in-cluster DNS names, so an IP is needed rather than the Service name):

export NFS_NAME="nfs-share"
export NFS_IP=$(kubectl get --template={{.spec.clusterIP}} service/$NFS_NAME)

Then reference it in your pod spec (note that $NFS_IP is a shell variable, so the manifest must pass through the shell before it reaches kubectl; also, the official Apache image is httpd, not apache):

  containers:
    - name: apache
      image: httpd
      volumeMounts:
        - mountPath: /var/www/html/
          name: nfs-vol
          subPath: html
  volumes:
    - name: nfs-vol
      nfs:
        server: $NFS_IP
        path: /
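Since $NFS_IP is a plain shell variable, it must be expanded before kubectl sees the manifest. An unquoted heredoc (the same pattern the NFS server script above uses) is enough; a minimal sketch with an example IP:

```shell
# Example value; normally obtained via "kubectl get service" as shown above
NFS_IP="10.96.0.10"

# An unquoted heredoc lets the shell expand $NFS_IP inside the manifest text
MANIFEST=$(cat <<EOF
  volumes:
    - name: nfs-vol
      nfs:
        server: $NFS_IP
        path: /
EOF
)

# The expanded text now contains the real IP, ready to pipe into kubectl apply
echo "$MANIFEST"
```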

This way, not only do you have a permanently running pod (the NFS server pod) from which to do the kubectl cp, you can also mount the same NFS volume in multiple pods concurrently, since NFS does not have the single-mount (ReadWriteOnce) restriction that most PVC drivers have.
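With that in place, copying the data locally no longer needs a throwaway pod; you can kubectl cp straight from the always-running NFS server pod (label and path values assume the deployment script above):

```shell
NFS_NAME="nfs-share"

# Find the NFS server pod via the labels set by the deployment script
NFS_POD=$(kubectl get pod \
  -l app.kubernetes.io/name=nfs-server,app.kubernetes.io/instance=$NFS_NAME \
  -o jsonpath='{.items[0].metadata.name}')

# Copy the whole shared directory (SHARED_DIRECTORY=/nfsshare) locally
kubectl cp "$NFS_POD":/nfsshare ./nfs-backup
```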

N.B.: I have been using this in-cluster NFS server technique for almost 5 years with no issues, supporting production-grade traffic volumes.

answered Dec 24 '25 by Lukman


