Use context: kubectl config use-context k8s-c2-AC

Write a command into /opt/course/15/cluster_events.sh which shows the latest events in the whole cluster, ordered by time (metadata.creationTimestamp). Use kubectl for it.

Now delete the kube-proxy Pod running on node cluster2-node1 and write the events this caused into /opt/course/15/pod_kill.log.

Finally kill the containerd container of the kube-proxy Pod on node cluster2-node1 and write the events into /opt/course/15/container_kill.log.

Do you notice differences in the events both actions caused?
Understanding and Managing Kubernetes Events: Pod and Container Termination
In Kubernetes, events provide critical insights into the state of your cluster and its resources. When Pods or containers are terminated, Kubernetes generates events that help administrators understand what actions were taken to maintain the desired state. In this guide, we’ll explore how to generate and capture events related to Pod and container termination, and analyze the differences in the events generated.
Step 1: Capturing Cluster Events
To start, we’ll capture all events across the cluster, sorted by their creation time. This will give us a chronological view of what’s happening in the cluster:
```bash
# /opt/course/15/cluster_events.sh
kubectl get events -A --sort-by=.metadata.creationTimestamp
```
This script lists all events in the cluster and sorts them by the time they were created, making it easier to trace the sequence of actions.
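One way to create the script file itself is with a heredoc; this is a minimal sketch (the mkdir is a precaution in case the directory does not already exist):

```shell
# Create the script file required by the task
mkdir -p /opt/course/15
cat <<'EOF' > /opt/course/15/cluster_events.sh
kubectl get events -A --sort-by=.metadata.creationTimestamp
EOF
# Make it executable so it can also be run directly
chmod +x /opt/course/15/cluster_events.sh
```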
Step 2: Deleting a kube-proxy Pod
Next, we’ll delete a specific kube-proxy Pod to observe the events generated by this action. First, identify the Pod running on cluster2-node1:
```bash
kubectl -n kube-system get pod -o wide | grep proxy
```
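If you prefer to grab the Pod name non-interactively, an awk filter on the `-o wide` output does the job. The sample below is hypothetical (the second Pod name and the IPs are invented) and only illustrates the column positions; in the wide output, NODE is the seventh column:

```shell
# Hypothetical sample of `kubectl -n kube-system get pod -o wide` output
# Columns: NAME READY STATUS RESTARTS AGE IP NODE NOMINATED-NODE READINESS-GATES
cat <<'EOF' > /tmp/pods_wide_sample.txt
kube-proxy-z64cg   1/1   Running   0   3d   192.168.100.11   cluster2-node1   <none>   <none>
kube-proxy-x7k2p   1/1   Running   0   3d   192.168.100.12   cluster2-node2   <none>   <none>
EOF
# Print the name of the kube-proxy Pod whose NODE column matches cluster2-node1
awk '$1 ~ /^kube-proxy/ && $7 == "cluster2-node1" {print $1}' /tmp/pods_wide_sample.txt
```

Piping the real kubectl output into the same awk expression gives the Pod name directly; kubectl also supports server-side filtering with --field-selector spec.nodeName=cluster2-node1.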
Once you’ve identified the Pod, delete it:
```bash
kubectl -n kube-system delete pod kube-proxy-z64cg
```
After deleting the Pod, run the cluster events script to capture the events triggered by this action:
```bash
sh /opt/course/15/cluster_events.sh
```
Step 3: Logging the Events
Write the events caused by the Pod deletion into a log file for further analysis, for example by redirecting the script's output with sh /opt/course/15/cluster_events.sh > /opt/course/15/pod_kill.log. The log should look similar to this:
```bash
# /opt/course/15/pod_kill.log
kube-system   9s          Normal   Killing            pod/kube-proxy-jsv7t   ...
kube-system   3s          Normal   SuccessfulCreate   daemonset/kube-proxy   ...
kube-system   <unknown>   Normal   Scheduled          pod/kube-proxy-m52sx   ...
default       2s          Normal   Starting           node/cluster2-node1    ...
kube-system   2s          Normal   Created            pod/kube-proxy-m52sx   ...
kube-system   2s          Normal   Pulled             pod/kube-proxy-m52sx   ...
kube-system   2s          Normal   Started            pod/kube-proxy-m52sx   ...
```
This log file contains all the events related to the deletion and recreation of the kube-proxy Pod, including the actions the DaemonSet took to recreate it.
Step 4: Killing a Container Inside the Pod
To further analyze event generation, we’ll manually kill the main container inside the kube-proxy Pod. First, SSH into the node where the Pod is running:
```bash
ssh cluster2-node1
```
Next, find the container ID of the kube-proxy container:
```bash
crictl ps | grep kube-proxy
```
Once you have the container ID, remove the container. Note that crictl rm refuses to remove a running container, so either stop it first with crictl stop or pass -f to force removal:

```bash
crictl rm -f 1e020b43c4423
```
You should see that Kubernetes immediately recreates the container to maintain the Pod’s desired state:
```bash
crictl ps | grep kube-proxy
0ae4245707910   36c4ebbc9d979   17 seconds ago   Running   kube-proxy   ...
```
Step 5: Logging Container Kill Events
Now, check and log the events caused by killing the container:
```bash
sh /opt/course/15/cluster_events.sh > /opt/course/15/container_kill.log

# /opt/course/15/container_kill.log
kube-system   13s   Normal   Created   pod/kube-proxy-m52sx   ...
kube-system   13s   Normal   Pulled    pod/kube-proxy-m52sx   ...
kube-system   13s   Normal   Started   pod/kube-proxy-m52sx   ...
```
These events show that Kubernetes recreated the container within the existing Pod, with fewer actions required compared to deleting the entire Pod.
Analyzing the Differences
Comparing the events generated by deleting a Pod versus killing a container reveals key differences:
- Pod Deletion: Deleting the entire Pod triggers the DaemonSet to recreate the Pod, resulting in more events, such as scheduling, pulling the image, and starting the Pod.
- Container Kill: Killing a container within a Pod causes fewer events since the Pod still exists, and only the container needs to be recreated.
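With both log files written, the difference can be made concrete by diffing the event reasons each action produced. The snippet below (bash, for the process substitution) uses hypothetical excerpts containing just the reason column from the outputs above, rather than the real log files:

```shell
# Hypothetical reason columns extracted from the two logs above
cat <<'EOF' > /tmp/pod_kill_reasons.txt
Killing
SuccessfulCreate
Scheduled
Starting
Created
Pulled
Started
EOF
cat <<'EOF' > /tmp/container_kill_reasons.txt
Created
Pulled
Started
EOF
# comm -23 prints only the lines unique to the first (sorted) input:
# the reasons that appear solely after a full Pod deletion
comm -23 <(sort /tmp/pod_kill_reasons.txt) <(sort /tmp/container_kill_reasons.txt)
```

The extra reasons (Killing, Scheduled, SuccessfulCreate, Starting) are exactly the scheduling-level work the DaemonSet controller and scheduler had to redo when the whole Pod disappeared.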
These differences highlight how Kubernetes efficiently manages resources and ensures that the desired state of your applications is maintained.
Conclusion
In this guide, we’ve explored how Kubernetes handles Pod and container termination, and how these actions generate different events. Understanding these events is crucial for troubleshooting and managing your cluster effectively. By capturing and analyzing these events, you can gain deeper insights into how Kubernetes maintains the desired state of your applications.