 

How do I format and mount a new Google Compute Engine disk so it can be mounted in a GKE pod?

I have created a new disk in Google Compute Engine.

gcloud compute disks create --size=10GB --zone=us-central1-a dane-disk

It says I need to format it, but I have no idea how to mount or access the disk.

gcloud compute disks list
NAME                                               LOCATION       LOCATION_SCOPE  SIZE_GB  TYPE         STATUS
notowania-disk                                     us-central1-a  zone            10       pd-standard  READY

New disks are unformatted. You must format and mount a disk before it can be used. You can find instructions on how to do this at:

https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting

I tried the instructions above, but lsblk is not showing the disk at all.

Do I need to create a VM and somehow attach the disk to it in order to use it? My goal is to mount the disk as a persistent GKE volume, independent of any VM (the last GKE upgrade caused the VM to be recreated and the data was lost).

Wojtas.Zet asked Oct 29 '19 at 14:10


1 Answer

Thanks for the clarification of what you are trying to do in the comments.

I have 2 different answers here.


The first is that my testing shows that the Kubernetes GCE PD documentation is exactly right, and the warning about formatting seems like it can be safely ignored.

If you just issue:

gcloud compute disks create --size=10GB --zone=us-central1-a my-test-data-disk

And then use it in a pod:

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: nginx-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    # This GCE PD must already exist.
    gcePersistentDisk:
      pdName: my-test-data-disk
      fsType: ext4

It will be formatted when it is mounted. This is likely because the fsType parameter instructs the system how to format the disk. You don't need to do anything with a separate GCE instance. The disk is retained even if you delete the pod or even the entire cluster. It is not reformatted on subsequent uses, and the data is kept around.

So, the warning message from gcloud is confusing, but can be safely ignored in this case.
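
One quick way to convince yourself of this (a sketch; test-pd-pod.yaml is just whatever name you saved the pod manifest above under) is to apply the pod and inspect the mount from inside it:

# Assumes the pod manifest above is saved as test-pd-pod.yaml (name is arbitrary)
kubectl apply -f test-pd-pod.yaml

# Once the pod is Running, the PD should show up as an ext4 filesystem at /test-pd
kubectl exec test-pd -- df -hT /test-pd

# Write a file; it will still be there if you delete and recreate the pod
kubectl exec test-pd -- sh -c 'echo hello > /test-pd/probe.txt'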


Now, in order to dynamically create a persistent volume based on GCE PD that isn't automatically deleted, you will need to create a new StorageClass that sets the Reclaim Policy to Retain, and then create a PersistentVolumeClaim based on that StorageClass. This also keeps basically the entire operation inside of Kubernetes, without needing to do anything with gcloud. Likewise, a similar approach is what you would want to use with a StatefulSet as opposed to a single pod, as described here.

Most of what you are looking to do is described in this GKE documentation about dynamically allocating PVCs as well as the Kubernetes StorageClass documentation. Here's an example:

gce-pd-retain-storageclass.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gce-pd-retained
reclaimPolicy: Retain
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none

The above storage class is basically the same as the 'standard' GKE storage class, except with the reclaimPolicy set to Retain.

pvc-demo.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo-disk
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gce-pd-retained
  resources:
    requests:
      storage: 10Gi

Applying the above will dynamically create a disk that will be retained when you delete the claim.

And finally a demo-pod.yaml that mounts the PVC as a volume (this is really a nonsense example using nginx, but it demonstrates the syntax):

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: nginx-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pvc-demo-disk

Now, if you apply these three in order, you'll get a container running that uses the PersistentVolumeClaim, which has automatically created (and formatted) a disk for you. When you delete the pod, the claim keeps the disk around. If you delete the claim, the Retain reclaim policy still keeps the disk from being deleted.
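
For reference, the whole sequence is just three kubectl apply calls (using the file names above), plus a couple of commands to watch the objects get created:

kubectl apply -f gce-pd-retain-storageclass.yaml
kubectl apply -f pvc-demo.yaml
kubectl apply -f demo-pod.yaml

# The claim should become Bound once the provisioner has created the disk,
# and the auto-created PV should show RECLAIM POLICY: Retain
kubectl get pvc pvc-demo-disk
kubectl get pv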

Note that the PV that is left around after this won't be automatically reused, as the data is still on the disk. See the Kubernetes documentation about what you can do to reclaim it in this case. Really, this mostly says that you shouldn't delete the PVC unless you're ready to do work to move the data off the old volume.
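
If you later find yourself with such a Released PV whose data you want to reuse, one common approach (not covered above, so treat it as a sketch) is to clear the old claimRef so the volume becomes Available again and can be bound by a fresh claim:

# Find the auto-generated PV name (e.g. pvc-<uuid>) and its status
kubectl get pv

# Removing claimRef moves the PV from Released back to Available; a new PVC
# with a matching size and storageClassName can then bind to it, data intact
kubectl patch pv <pv-name> -p '{"spec":{"claimRef": null}}'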

Note that these disks will even continue to exist when the entire GKE cluster is deleted as well (and you will continue to be billed for them until you delete them).
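
Finally, for the StatefulSet variant mentioned earlier: instead of a standalone PersistentVolumeClaim you declare a volumeClaimTemplates entry, and each replica gets its own dynamically provisioned (and, with the StorageClass above, retained) disk. A minimal sketch reusing the gce-pd-retained StorageClass; the names and the headless Service are illustrative:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test-ss
spec:
  # A headless Service named test-ss is assumed to exist for this StatefulSet
  serviceName: test-ss
  replicas: 1
  selector:
    matchLabels:
      app: test-ss
  template:
    metadata:
      labels:
        app: test-ss
    spec:
      containers:
      - image: nginx
        name: nginx-container
        volumeMounts:
        - mountPath: /test-pd
          name: test-volume
  # One PVC (and therefore one retained GCE PD) is created per replica
  volumeClaimTemplates:
  - metadata:
      name: test-volume
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: gce-pd-retained
      resources:
        requests:
          storage: 10Gi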

robsiemb answered Oct 19 '22 at 06:10