Verifying kube-proxy Functionality in Kubernetes by Using iptables
In Kubernetes, `kube-proxy` is responsible for maintaining network rules on nodes, ensuring that traffic to Services is correctly routed. This post walks through verifying that `kube-proxy` is functioning correctly on all nodes by inspecting iptables rules. We'll create a test Pod and Service, check the iptables rules on every node, and then clean up the resources.
Step 1: Create a Test Pod
First, we need to create a Pod named `p2-pod` with two containers. The first container runs an Nginx server, and the second runs a BusyBox shell that keeps the Pod alive. Generate a starting manifest with a client-side dry run:
```shell
kubectl -n project-hamster run p2-pod --image=nginx:1.21.3-alpine --dry-run=client -o yaml > p2.yaml
```
Edit the `p2.yaml` file to include the BusyBox container:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: p2-pod
  namespace: project-hamster
spec:
  containers:
  - image: nginx:1.21.3-alpine
    name: nginx-container
  - image: busybox:1.31
    name: busybox-container
    command: ["sh", "-c", "sleep 1d"]
```
Apply the configuration to create the Pod:
```shell
kubectl apply -f p2.yaml
```
Step 2: Create a Service
Next, expose `p2-pod` using a Service that forwards traffic from port 3000 to port 80 of the Nginx container:
```shell
kubectl -n project-hamster expose pod p2-pod --name p2-service --port 3000 --target-port 80
```
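For reference, the command above produces a Service roughly equivalent to the following manifest. Treat this as a sketch rather than the exact generated output; in particular, the `run: p2-pod` selector assumes the Pod was created with `kubectl run`, which adds that label automatically:

```yaml
# Sketch of the Service that kubectl expose generates.
# The selector assumes the run=p2-pod label added by kubectl run.
apiVersion: v1
kind: Service
metadata:
  name: p2-service
  namespace: project-hamster
spec:
  selector:
    run: p2-pod
  ports:
  - port: 3000        # Service port clients connect to
    targetPort: 80    # containerPort on the Nginx container
    protocol: TCP
```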
Confirm that the Service and Pod are connected by listing the services and endpoints:
```shell
kubectl -n project-hamster get pod,svc,ep
```
Step 3: Confirm kube-proxy is Running and Using iptables
Now, log into each node and confirm that `kube-proxy` is running and using iptables. Use the `crictl` command to find the `kube-proxy` container and inspect its logs on each node:
```shell
# On each node
ssh cluster1-controlplane1
crictl ps | grep kube-proxy
crictl logs <container_id>
```
Look for the log entry `Using iptables Proxier`. Repeat this check on all nodes (e.g., `cluster1-node1` and `cluster1-node2`).
Step 4: Check iptables Rules
To verify that `kube-proxy` has correctly configured the iptables rules for `p2-service`, check the iptables rules on each node:
```shell
# On each node
iptables-save | grep p2-service
```
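To make the check concrete, here is a local sketch of the kind of rule `kube-proxy` writes for a ClusterIP Service. The cluster IP and `KUBE-SVC-...` chain hash below are made-up placeholders, but the rule comment carries the `namespace/service-name` pair, which is what the grep actually matches:

```shell
# Made-up sample of an iptables-save line as kube-proxy typically writes it;
# the IP 10.96.105.28 and the KUBE-SVC-... hash are placeholder values.
sample='-A KUBE-SERVICES -d 10.96.105.28/32 -p tcp -m comment --comment "project-hamster/p2-service cluster IP" -m tcp --dport 3000 -j KUBE-SVC-2A6FNMCK6FDH7PJH'
echo "$sample" | grep p2-service
```

Note that the Service port (3000) appears in the rule as `--dport 3000`, so the grep output also confirms the port mapping was picked up.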
Save the iptables rules related to `p2-service` into a file:
```shell
ssh cluster1-controlplane1 iptables-save | grep p2-service >> /opt/course/p2/iptables.txt
ssh cluster1-node1 iptables-save | grep p2-service >> /opt/course/p2/iptables.txt
ssh cluster1-node2 iptables-save | grep p2-service >> /opt/course/p2/iptables.txt
```
Step 5: Clean Up
Finally, delete `p2-service` and verify that the corresponding iptables rules have been removed from all nodes:
```shell
kubectl -n project-hamster delete svc p2-service

# On each node
iptables-save | grep p2-service
```
There should be no iptables rules remaining for `p2-service`.
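Because `grep` exits 0 when it finds a match and 1 when it finds nothing, this final check is easy to script. A minimal local sketch, with a made-up `p2-service`-free sample standing in for real `iptables-save` output:

```shell
# Simulated node output after deletion: no p2-service rules remain.
# In the real check, replace the echo with: ssh <node> iptables-save
sample='-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports" -j KUBE-NODEPORTS'
if echo "$sample" | grep -q p2-service; then
  echo "p2-service rules still present"
else
  echo "clean"
fi
```

Looping this over all three nodes gives a quick pass/fail summary of the cleanup.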
Conclusion
By following these steps, you can verify that `kube-proxy` is functioning correctly on every node in your Kubernetes cluster. Ensuring that the correct iptables rules are in place is crucial for reliable Service-to-Pod communication within the cluster.