Use context: kubectl config use-context k8s-c1-H
Create a new PersistentVolume named safari-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.
Next create a new PersistentVolumeClaim in Namespace project-tiger named safari-pvc. It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should bind to the PV correctly.
Finally create a new Deployment safari in Namespace project-tiger which mounts that volume at /tmp/safari-data. The Pods of that Deployment should be of image httpd:2.4.41-alpine.
Setting Up Persistent Storage in Kubernetes with PersistentVolume and PersistentVolumeClaim
In Kubernetes, managing persistent storage is crucial for stateful applications. This guide will walk you through creating a PersistentVolume (PV) and a PersistentVolumeClaim (PVC), and then using them in a Deployment to ensure your data persists across Pod restarts.
Step 1: Create the PersistentVolume
First, we’ll create a PersistentVolume by editing a YAML file. Find an example manifest in the official Kubernetes documentation and modify it to suit our needs:
```bash
vim 6_pv.yaml
```
Edit the file as follows:
```yaml
# 6_pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: safari-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data"
```
Note that, as the task requires, no storageClassName is defined. Once the file is ready, create the PersistentVolume with the following command:
```bash
kubectl apply -f 6_pv.yaml
```
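Optionally, you can check the PV right away; until a claim binds to it, its STATUS should show Available:

```bash
kubectl get pv safari-pv
```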
Step 2: Create the PersistentVolumeClaim
Next, we’ll create a PersistentVolumeClaim to bind to our PV. Again, find an example in the official documentation and modify it:
```bash
vim 6_pvc.yaml
```
Edit the file as follows:
```yaml
# 6_pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: safari-pvc
  namespace: project-tiger
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```
Since neither object defines a storageClassName, the claim matches safari-pv by its requested capacity and access mode. Create the PVC with the following command:
```bash
kubectl apply -f 6_pvc.yaml
```
You can check if the PV and PVC are bound correctly by running:
```bash
kubectl -n project-tiger get pv,pvc
```
You should see output indicating that both the PV and PVC are in the “Bound” state:
```
NAME                         CAPACITY   ...   STATUS   CLAIM                      ...
persistentvolume/safari-pv   2Gi        ...   Bound    project-tiger/safari-pvc   ...

NAME                               STATUS   VOLUME      CAPACITY   ...
persistentvolumeclaim/safari-pvc   Bound    safari-pv   2Gi        ...
```
Step 3: Create a Deployment and Mount the Volume
Now, we’ll create a Deployment and mount the PVC at a specific path in the container. Start by generating a Deployment skeleton, which we’ll then edit to add the volume:
```bash
kubectl -n project-tiger create deploy safari --image=httpd:2.4.41-alpine --dry-run=client -o yaml > 6_dep.yaml
```
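Next, edit 6_dep.yaml to declare the PVC as a Pod volume and mount it at /tmp/safari-data. A sketch of the edited manifest, assuming the defaults kubectl generated (label app: safari, container name httpd) and an arbitrary volume name data:

```yaml
# 6_dep.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: safari
  namespace: project-tiger
  labels:
    app: safari
spec:
  replicas: 1
  selector:
    matchLabels:
      app: safari
  template:
    metadata:
      labels:
        app: safari
    spec:
      volumes:
        # reference the PVC created above
        - name: data
          persistentVolumeClaim:
            claimName: safari-pvc
      containers:
        - name: httpd
          image: httpd:2.4.41-alpine
          volumeMounts:
            # mount the volume at the path the task requires
            - name: data
              mountPath: /tmp/safari-data
```

The volume name (data here) only links the volumes entry to its volumeMounts entry; any name works as long as both match.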
Create the Deployment with the following command:
```bash
kubectl apply -f 6_dep.yaml
```
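Before checking the mount, it’s worth confirming the Deployment rolled out and its Pod is Running:

```bash
kubectl -n project-tiger get deploy,pod
```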
Step 4: Verify the Volume Mount
Finally, confirm that the volume is correctly mounted by describing the Pod. The Pod name below is from our run; yours will differ:
```bash
kubectl -n project-tiger describe pod safari-5cbf46d6d-mjhsb | grep -A2 Mounts:
```
You should see the mount path listed in the output, indicating that the volume is properly mounted:
```
Mounts:
  /tmp/safari-data from data (rw)    # there it is
  /var/run/secrets/kubernetes.io/serviceaccount from default-token-n2sjj (ro)
```
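As an extra sanity check beyond what the task asks for, you could write through the mount from inside a Pod of the Deployment; the filename here is just an example:

```bash
kubectl -n project-tiger exec deploy/safari -- touch /tmp/safari-data/test-file
```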
In this guide, we walked through creating a PersistentVolume, binding it to a PersistentVolumeClaim, and mounting it in a Deployment. This ensures that your application has persistent storage that remains available even if the Pod is restarted.