What is the main difference between hostPath and local persistent volumes in Kubernetes? Assuming I have a Kubernetes cluster running on my machine, with a pod running a database that uses a local persistent volume to save data: if the whole cluster fails (for example, by shutting down the machine), would there no longer be any trace of the data previously saved by the pod in the persistent volume at the next start of the machine (and cluster)?
Statically provisioning hostPath volumes: the PV's storageClassName is used to bind persistent volume claim requests to this persistent volume. The volume can be mounted as read-write by a single node. The configuration file specifies that the volume is at /mnt/data on the cluster's node.
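For example, a statically provisioned hostPath PV along these lines might look like the following (the name task-pv-volume and the 10Gi capacity are illustrative, following the pattern in the Kubernetes docs):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume        # illustrative name
  labels:
    type: local
spec:
  storageClassName: manual    # used to bind PVCs to this PV
  capacity:
    storage: 10Gi             # illustrative size
  accessModes:
    - ReadWriteOnce           # read-write by a single node
  hostPath:
    path: "/mnt/data"         # directory on the node's filesystem
```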
A persistent volume (PV) is the "physical" volume on the host machine that stores your persistent data. A persistent volume claim (PVC) is a request for the platform to provide such a PV, and it also acts as a claim check on that resource; you attach a PV to your pods via a PVC.
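For illustration, a PVC that would bind to a PV like the one above could look like this (the name and the requested size are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim         # illustrative name
spec:
  storageClassName: manual    # must match the PV's storageClassName to bind
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi            # must fit within the PV's capacity
```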
Before version 1.22, Kubernetes offered three access modes for PVs and PVCs: ReadWriteOnce – the volume can be mounted as read-write by a single node. ReadOnlyMany – the volume can be mounted read-only by many nodes. ReadWriteMany – the volume can be mounted as read-write by many nodes.
The difference between PVs and PVCs in Kubernetes: PVs are cluster resources, provisioned by the cluster administrator or dynamically by Kubernetes, whereas PVCs are a user's or developer's request for storage.
Kubernetes local persistent volumes work well in clustered Kubernetes environments without the need to explicitly bind a pod to a certain node. Instead, the pod is bound to the node implicitly, by referencing a persistent volume claim that points to the local persistent volume.
With hostPath volumes, a pod referencing the volume may be moved by the scheduler to a different node, resulting in data loss. With local persistent volumes, by contrast, the Kubernetes scheduler ensures that a pod using a local persistent volume is always scheduled to the same node.
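A minimal sketch of a local persistent volume shows how this works: the required nodeAffinity section is what lets the scheduler pin pods to the right node (the node name, disk path, and size here are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv        # illustrative name
spec:
  capacity:
    storage: 10Gi               # illustrative size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1       # illustrative path to a local disk on the node
  nodeAffinity:                 # required for local PVs: pins the volume to one node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-1 # illustrative node name
```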
Please note that, as per the Kubernetes docs, a hostPath PersistentVolume is for single-node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster. That said, it does work in my case.
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. So, in a multi-node cluster, if the pod is restarted for some reason and assigned to another node, the new node won't have the old data at the same path. That's why hostPath volumes work well only on single-node clusters.
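For comparison, a pod can also mount a hostPath volume directly, without any PV or PVC at all (the pod name, image, and paths below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo           # illustrative name
spec:
  containers:
    - name: app
      image: nginx              # illustrative image
      volumeMounts:
        - name: host-volume
          mountPath: /usr/share/nginx/html
  volumes:
    - name: host-volume
      hostPath:
        path: /mnt/data         # path on whichever node the pod lands on
        type: DirectoryOrCreate
```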
Here, Kubernetes local persistent volumes help us overcome this restriction, so we can work in a multi-node environment with no problems. The cluster remembers which node was used for provisioning the volume, thus making sure that a restarting pod will always find the data storage in the state it left it before the reboot.
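Local PVs are typically paired with a StorageClass that uses volumeBindingMode: WaitForFirstConsumer, so volume binding is delayed until a pod is actually scheduled (the class name local-storage is an assumption matching the sketch above):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # local PVs are statically provisioned
volumeBindingMode: WaitForFirstConsumer     # delay binding until a pod is scheduled
```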
Once a node has died, the data of both hostPath and local persistent volumes on that node is lost.