Use context: kubectl config use-context k8s-c1-H

Use Namespace project-tiger for the following. Create a DaemonSet named ds-important with image httpd:2.4-alpine and labels id=ds-important and uuid=18426a0b-5f59-4e10-923f-c0e078e82462. The Pods it creates should request 10 millicores of CPU and 10 mebibytes of memory. The Pods of that DaemonSet should run on all nodes, including the controlplanes.
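Before working on the task, it can help to switch to the requested context and confirm that the namespace exists; both names below are taken directly from the task above:

kubectl config use-context k8s-c1-H
kubectl get namespace project-tiger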
Creating a DaemonSet in Kubernetes from a Deployment
DaemonSets are a critical feature in Kubernetes, allowing you to ensure that a copy of a Pod runs on every node in your cluster. However, kubectl create has no generator for DaemonSets, so you cannot create one directly from the command line. Instead, you can generate a Deployment and then convert it into a DaemonSet. In this guide, we'll walk through how to do just that.
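You can confirm this limitation yourself: kubectl create lists the resource kinds it can generate, and daemonset is not among them (output trimmed; wording may vary slightly by kubectl version):

kubectl create --help | grep -i -E 'deployment|daemonset'
#   deployment    Create a deployment with the specified name
# (no daemonset line appears, which is why we start from a Deployment)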
Step 1: Create a Deployment
Since we aren’t able to create a DaemonSet directly, we’ll start by creating a Deployment and then modify it. First, generate a Deployment YAML file:
kubectl -n project-tiger create deployment --image=httpd:2.4-alpine ds-important --dry-run=client -o yaml > 11.yaml
This command will create a Deployment YAML file named 11.yaml. Now, let's edit this file to convert the Deployment into a DaemonSet.
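For reference, the generated 11.yaml should look roughly like the following (field order and defaults can differ slightly between kubectl versions); the fields marked below are the ones we'll change or remove:

# 11.yaml as generated (approximate; exact output can vary by kubectl version)
apiVersion: apps/v1
kind: Deployment          # will become DaemonSet
metadata:
  creationTimestamp: null
  labels:
    app: ds-important     # will be replaced by the id/uuid labels
  name: ds-important
  namespace: project-tiger
spec:
  replicas: 1             # remove: DaemonSets have no replicas field
  selector:
    matchLabels:
      app: ds-important
  strategy: {}            # remove: DaemonSets use updateStrategy, not strategy
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ds-important
    spec:
      containers:
      - image: httpd:2.4-alpine
        name: ds-important
        resources: {}
status: {}                # remove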
Step 2: Modify the YAML to Create a DaemonSet
Open the 11.yaml file in your preferred text editor (e.g., vim) and make the following changes:
# 11.yaml
apiVersion: apps/v1
kind: DaemonSet                  # Change from Deployment to DaemonSet
metadata:
  name: ds-important
  namespace: project-tiger       # Ensure this is set to the correct namespace
  labels:                        # Add these labels
    id: ds-important
    uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
spec:
  selector:
    matchLabels:                 # Add these labels
      id: ds-important
      uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
  template:
    metadata:
      labels:                    # Add these labels
        id: ds-important
        uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
    spec:
      containers:
      - image: httpd:2.4-alpine
        name: ds-important
        resources:
          requests:              # Add resource requests
            cpu: 10m
            memory: 10Mi
      tolerations:               # Add toleration to ensure the DaemonSet runs on all nodes
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane
In this YAML file, we changed the kind from Deployment to DaemonSet, removed unnecessary fields like replicas and strategy, and added resource requests and tolerations to ensure the DaemonSet runs on all nodes, including control-plane nodes.
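Note that the toleration key has to match the taint actually set on your controlplane node(s): newer clusters taint controlplanes with node-role.kubernetes.io/control-plane, while older ones may still use node-role.kubernetes.io/master, so it's worth checking. A quick way to do so (the node name cluster1-controlplane1 is taken from the example output further below):

kubectl describe node cluster1-controlplane1 | grep Taints
# Taints:  node-role.kubernetes.io/control-plane:NoSchedule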
Step 3: Apply the DaemonSet Configuration
Now that the YAML file is properly configured, apply it to your Kubernetes cluster:
kubectl apply -f 11.yaml
After applying, verify that the DaemonSet is running on all nodes:
kubectl -n project-tiger get ds
kubectl -n project-tiger get pods -l id=ds-important -o wide
You should see output similar to the following, confirming that the DaemonSet is running on all nodes:
NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ds-important   3         3         3       3            3           <none>          8s

NAME                 READY   STATUS    NODE
ds-important-6pvgm   1/1     Running   cluster1-node1
ds-important-lh5ts   1/1     Running   cluster1-controlplane1
ds-important-qhjcq   1/1     Running   cluster1-node2
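As a final check, you can confirm that the CPU and memory requests actually made it into the Pods; a quick way is kubectl describe filtered by the id=ds-important label from the manifest above:

kubectl -n project-tiger describe pods -l id=ds-important | grep -A 2 Requests
#     Requests:
#       cpu:     10m
#       memory:  10Mi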
Conclusion
In this guide, we've demonstrated how to create a DaemonSet in Kubernetes by first generating a Deployment and then modifying it. This method is useful because kubectl cannot generate a DaemonSet directly. By understanding the YAML structure and the required changes, you can easily ensure that your application runs across all nodes in your cluster.