Use context: kubectl config use-context k8s-c3-CCC

Create a Static Pod named my-static-pod in Namespace default on cluster3-controlplane1. It should be of image nginx:1.16-alpine and have resource requests for 10m CPU and 20Mi memory.

Then create a NodePort Service named static-pod-service which exposes that static Pod on port 80. Check if it has Endpoints and if it's reachable through the cluster3-controlplane1 internal IP address. You can connect to the internal node IPs from your main terminal.
Creating and Exposing a Static Pod in Kubernetes
In Kubernetes, static Pods are managed directly by the kubelet on a node rather than by the API server: the kubelet watches a manifest directory and runs whatever Pod definitions it finds there, creating a mirror Pod on the API server so the Pod is still visible via kubectl. This makes static Pods ideal for essential system components that need to run outside of Kubernetes' usual orchestration process (kubeadm, for example, runs the control plane components themselves as static Pods). In this guide, we'll walk through the steps to create a static Pod, add resource requests, and expose it as a Service.
Step 1: Creating the Static Pod
To create a static Pod, we first need to generate a Pod manifest and place it in the directory that the kubelet watches for static Pods. In kubeadm-based clusters, this directory is /etc/kubernetes/manifests/ by default. Start by SSHing into the control plane node:
```shell
ssh cluster3-controlplane1
```
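If you are unsure which directory the kubelet on this node actually watches, you can check its configuration. This is a sketch assuming a kubeadm-style setup where the kubelet config lives at /var/lib/kubelet/config.yaml; the path can differ on other installations:

```shell
# The static Pod directory is set by the staticPodPath field in the
# kubelet config file (assumed location: /var/lib/kubelet/config.yaml)
grep staticPodPath /var/lib/kubelet/config.yaml
```

On kubeadm clusters this typically prints `staticPodPath: /etc/kubernetes/manifests`.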
Navigate to the /etc/kubernetes/manifests/ directory:
```shell
cd /etc/kubernetes/manifests/
```
Create the Pod manifest using the following command:
```shell
kubectl run my-static-pod \
  --image=nginx:1.16-alpine \
  -o yaml --dry-run=client > my-static-pod.yaml
```
This command generates a basic Pod manifest named my-static-pod.yaml for a Pod running the nginx:1.16-alpine image. Because of --dry-run=client, nothing is created on the API server yet; the kubelet will create the Pod once the finished manifest sits in this directory.
Step 2: Adding Resource Requests
Next, we’ll edit the generated YAML file to include resource requests for CPU and memory. Open the file in your preferred text editor:
```shell
vim my-static-pod.yaml
```
Modify the file to include the following resource requests:
```yaml
# /etc/kubernetes/manifests/my-static-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-static-pod
  labels:
    run: my-static-pod
spec:
  containers:
  - name: my-static-pod
    image: nginx:1.16-alpine
    resources:
      requests:
        cpu: 10m
        memory: 20Mi
  dnsPolicy: ClusterFirst
  restartPolicy: Always
```
Save and close the file. The kubelet will automatically detect this file and create the Pod.
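You can also confirm on the node itself that the kubelet started the container, independently of the API server. This sketch assumes crictl is installed and configured for the node's container runtime, which is common but not guaranteed on every cluster:

```shell
# List running containers directly via the container runtime
# and filter for our static Pod's container
crictl ps | grep my-static-pod
```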
Step 3: Verifying the Static Pod
To verify that the static Pod is running, use the following command:
```shell
kubectl get pod -A | grep my-static
```
Example output:
```
NAMESPACE   NAME                                   READY   STATUS    AGE
default     my-static-pod-cluster3-controlplane1   1/1     Running   22s
```
The Pod is now up and running, managed directly by the kubelet. Note the node name appended to the Pod name (my-static-pod-cluster3-controlplane1): the kubelet adds this suffix when it creates the mirror Pod for a static Pod.
Step 4: Exposing the Static Pod as a Service
To allow access to the static Pod, we need to expose it as a service. Run the following command to create a NodePort service:
```shell
kubectl expose pod my-static-pod-cluster3-controlplane1 \
  --name static-pod-service \
  --type=NodePort \
  --port 80
```
This creates a Service that exposes the static Pod on port 80 and, because the type is NodePort, on an automatically assigned high port on every node. Note that kubectl expose reuses the Pod's run: my-static-pod label as the Service selector. The generated Service YAML looks like this:
```yaml
# kubectl expose pod my-static-pod-cluster3-controlplane1 --name static-pod-service --type=NodePort --port 80
apiVersion: v1
kind: Service
metadata:
  name: static-pod-service
  labels:
    run: my-static-pod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: my-static-pod
  type: NodePort
status:
  loadBalancer: {}
```
Verify the service and its associated endpoints:
```shell
kubectl get svc,ep -l run=my-static-pod
```
You should see output similar to the following:
```
NAME                         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/static-pod-service   NodePort   10.99.168.252   <none>        80:30352/TCP   30s

NAME                           ENDPOINTS      AGE
endpoints/static-pod-service   10.32.0.4:80   30s
The static Pod is now accessible via the service.
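The task also asks us to verify reachability through the node's internal IP. One way to sketch this from your main terminal is to look up the internal IP and the assigned NodePort with kubectl's jsonpath output, then curl the combination; the exact IP and port will differ on your cluster:

```shell
# Internal IP of the node (the InternalIP entry in .status.addresses)
NODE_IP=$(kubectl get node cluster3-controlplane1 \
  -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}')

# NodePort that was auto-assigned to the Service
NODE_PORT=$(kubectl get svc static-pod-service \
  -o jsonpath='{.spec.ports[0].nodePort}')

# Should return the nginx welcome page HTML
curl "http://${NODE_IP}:${NODE_PORT}"
```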
Conclusion
In this guide, we demonstrated how to create a static Pod in Kubernetes, add resource requests, and expose the Pod as a service. Static Pods are useful for running essential components on specific nodes, and understanding how to manage them is crucial for maintaining a robust Kubernetes environment.