Kubernetes Workshop Experience Sharing

HungWei Chiu
Jun 16, 2024

· Introduction
· Environment
· Problem Introduction
Application Crash
Node Not Ready
Pod’s Information Shows <Invalid Ago>
Not Able to Access Server Via Service
StatefulSet Crash
No ConfigMap Auto Update
HPA Doesn’t Work
Two Pods Share the Same ReadWriteOnce PVC
Prometheus Scrapes No Data
P99 Latency Spikes
· Conclusion

Introduction

This year, I participated in the Taiwan SRE Summit 2024, where I conducted a nearly two-hour workshop focused on Kubernetes. Unlike the typical hands-on tutorials for beginners, this workshop adopted a troubleshooting approach.

Each participant was provided with a pre-configured Kubernetes cluster and asked to solve a series of problems within two hours, sharing their problem-solving strategies with the group along the way.

Environment

The workshop saw an unexpectedly high number of participants. We had to deploy additional environments to ensure each participant had a Kubernetes cluster. Here are the details of the environment:

  1. A total of 50 Kubernetes Clusters (version 1.28)
  2. Each K8s cluster had three nodes (2C4G per node)
  3. 45 clusters were set up on on-premise machines based on OpenStack
  4. 5 clusters were provided by AKS
  5. Each participant was given access via SSH Key to connect to all Kubernetes nodes
  6. A GitHub repository hosted the YAML files needed for each problem
  7. Each Kubernetes setup included Prometheus and Grafana, installed via the kube-prometheus-stack Helm chart, for monitoring and analysis

The cluster architecture is as follows: each node had both private and public IPs. Participants could access Prometheus and Grafana through the public IPs, and nodes could communicate with each other using private IPs.

Note: The AKS environment’s control plane is managed by AKS, so only the self-built OpenStack environment required master nodes.

The workshop flow was as follows:

  1. Distribute the environment to all participants and confirm everyone could access it without issues.
  2. The host introduced the problems two at a time, asking participants to deploy each scenario and troubleshoot the issue.
  3. Participants who resolved an issue were asked to share their problem-solving strategies and perspectives.

Problem Introduction

Here are the main themes of the ten issues presented, with a detailed introduction for each problem:

  1. Application Crash
  2. Node Not Ready
  3. Pod’s information shows <Invalid Ago>
  4. Not Able to Access Server Via Service
  5. StatefulSet crash
  6. No ConfigMap auto-update
  7. HPA doesn’t work
  8. Two Pods share the same ReadWriteOnce PVC
  9. Prometheus scrapes no data
  10. P99 latency spikes

Application Crash

Question:

Deploy an application. After approximately 70 seconds, the application enters the CrashLoopBackOff state. Identify and resolve the issue to prevent it from crashing repeatedly.

Root Cause:

The Pod’s memory and CPU limits were set too low. Hitting the memory limit triggered an OOM (Out of Memory) kill, while hitting the CPU limit caused throttling that made the Liveness Probe fail repeatedly, so the container was restarted over and over.
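As a rough sketch, the offending resources block looked something like the one below (the exact values are assumptions for illustration). Raising the limits to match the application’s real usage stops both the OOM kills and the throttling-induced probe failures:

resources:
  requests:
    cpu: "50m"
    memory: "32Mi"
  limits:
    cpu: "50m"       # too low: work cannot finish within each CFS period, so the Liveness Probe times out
    memory: "32Mi"   # too low: the container is OOM-killed once usage crosses this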

Node Not Ready

Question:

After running kubectl get nodes, some nodes are displayed as NotReady, as shown below:

Identify and resolve the cause of the NotReady status.

Root Cause:

Swap had accidentally been enabled on the node, which prevented the kubelet from running and reporting the node’s status. Disabling swap and restarting the kubelet brought the node back to a Ready state.
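A minimal fix on the affected node, assuming a systemd-based distribution, looks like this:

sudo swapoff -a                            # turn swap off immediately
sudo sed -i '/ swap / s/^/#/' /etc/fstab   # comment out swap so it stays off across reboots
sudo systemctl restart kubelet             # let the kubelet start and report Ready again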

Pod’s Information Shows <Invalid Ago>

Question:

Deploy an application. The application crashes, but the Restart time displayed is incorrect, as shown below:

Explain why it shows <Invalid Ago> and fix the issue.

Root Cause:

The timestamp is calculated by the kubelet on the node, so check the node’s timezone and time synchronization. Here, the NTP client had not started correctly on the node, leading to clock drift; the reported time difference was too large, which produced the <Invalid Ago> message.
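On a systemd-based node, checking and restoring time synchronization might look like the following (which time-sync daemon is in use varies by distribution, so treat the service name as an assumption):

timedatectl status                         # "System clock synchronized: no" confirms the drift
sudo systemctl restart systemd-timesyncd   # or chronyd / ntpd, whichever the node runs
timedatectl status                         # verify the clock is synchronized again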

Not Able to Access Server Via Service

Question:

Deploy two applications that communicate via a Kubernetes Service. The client logs show that it cannot access the server.

Root Cause:

This is a basic YAML configuration issue. Key points to check in the Service configuration include:

  1. The selector labels
  2. The ports section, specifically port and targetPort

Ensure that the client is not mistakenly using targetPort when accessing the service.
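As a sketch, a correctly wired Service looks like this (names and ports are illustrative assumptions):

apiVersion: v1
kind: Service
metadata:
  name: server
spec:
  selector:
    app: server          # must match the server Pod's labels exactly
  ports:
    - port: 80           # clients connect to server:80
      targetPort: 8080   # the containerPort the application actually listens on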

StatefulSet Crash

Question:

A StatefulSet application keeps crashing. How can you fix this issue without losing data?

Root Cause:

The Pod logs indicate that the PVC (Persistent Volume Claim) is full. To resolve this, participants need to dynamically resize the PVC. First, update the PVC size, then delete the StatefulSet Pod to restart it with the increased storage.

If you modify the PVC size inside the StatefulSet YAML and run kubectl apply -f, you will get an error, because a StatefulSet’s volumeClaimTemplates field is immutable. To address this, delete the StatefulSet with the --cascade=orphan option:

kubectl delete sts --cascade=orphan <statefulset-name>

This removes the record from etcd without deleting the running Pods, allowing the new YAML to be applied.
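Put together, the resize flow looks roughly like this (assuming the StorageClass has allowVolumeExpansion: true; the names and sizes are illustrative):

# 1. Expand the live PVC directly; this is allowed when the StorageClass supports expansion
kubectl patch pvc data-web-0 -p '{"spec":{"resources":{"requests":{"storage":"5Gi"}}}}'

# 2. Delete the StatefulSet object only, leaving its Pods running
kubectl delete sts web --cascade=orphan

# 3. Re-apply the YAML with the larger volumeClaimTemplates size, then restart the Pod
kubectl apply -f statefulset.yaml
kubectl delete pod web-0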

No ConfigMap Auto Update

Question:

Update the contents of a ConfigMap and reapply it, but the Pod does not reflect the changes even after a few minutes.

Root Cause:

ConfigMap updates propagate in two stages: from the API Server to the node’s kubelet, and from the kubelet into the container. However, any ConfigMap mounted using subPath cannot be automatically updated, since the mounted file is not watched via inotify. The only solution is to delete the Pod so it restarts and picks up the updated ConfigMap.
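The difference is visible in the volumeMounts. As a sketch with illustrative names and paths, the first variant below never picks up updates, while the second does:

volumeMounts:
  # mounted as a single file via subPath: updates are NOT propagated
  - name: config
    mountPath: /etc/app/app.conf
    subPath: app.conf

volumeMounts:
  # whole ConfigMap mounted as a directory: the kubelet refreshes it in place
  - name: config
    mountPath: /etc/app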

HPA Doesn’t Work

Question:

Deploy an application and configure HPA (Horizontal Pod Autoscaler). Even though Prometheus shows increased CPU usage, the number of replicas doesn’t increase.

Root Cause:

HPA and Prometheus are independent. HPA relies on the Metrics API, typically provided by the Metrics Server. Without the Metrics Server installed, HPA cannot function. Check with kubectl top to see if the metrics API is available. If not, HPA won't work.
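A quick way to verify and fix this (the manifest URL is the metrics-server project’s standard release artifact):

kubectl top pods                                 # "error: Metrics API not available" confirms the problem
kubectl get apiservices v1beta1.metrics.k8s.io   # should exist and report Available=True
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml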

Two Pods Share the Same ReadWriteOnce PVC

Question:

Deploy a ReadWriteOnce PVC, and two Deployment objects specify this PVC. The result is as shown below:

Why can so many Pods use the PVC simultaneously, despite it being ReadWriteOnce?

Root Cause:

The three familiar access modes (ReadWriteOnce, ReadOnlyMany, ReadWriteMany) are node-scoped, so ReadWriteOnce still allows multiple Pods on the same node to mount the PVC. In the example above, the Pods stuck in ContainerCreating are the ones scheduled to nodes that cannot attach the PVC.

To ensure only one Pod can use the PVC at a time, use the ReadWriteOncePod access mode. For more details, refer to the design document: https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/2485-read-write-once-pod-pv-access-mode/README.md
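Switching to this mode is a one-line change on the PVC (the name and size below are illustrative, and the cluster’s CSI driver must support ReadWriteOncePod):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteOncePod   # enforced per Pod, not per node
  resources:
    requests:
      storage: 1Gi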

With this mode, all other Pods will enter a Pending state, as shown below:

Prometheus Scrapes No Data

Question:

Deploy an application with a Prometheus Endpoint and use ServiceMonitor to notify Prometheus. However, Prometheus doesn’t scrape any data, even though manual connections to the service work.

Root Cause:

The deployed ServiceMonitor object wasn’t being picked up by Prometheus. Prometheus uses serviceMonitorSelector to decide which ServiceMonitor objects it watches, so ensure the labels on the ServiceMonitor match that selector; otherwise, the ServiceMonitor will never take effect.
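With kube-prometheus-stack, the Prometheus object by default only selects ServiceMonitors carrying the Helm release label, so a working ServiceMonitor needs that label (the release name, app label, and port name below are assumptions):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  labels:
    release: kube-prometheus-stack   # must match the Prometheus serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: my-app                    # must match the Service's labels
  endpoints:
    - port: metrics                  # the named port on the Service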

P99 Latency Spikes

Question:

After deploying an application, Grafana shows high P99 latency spikes, as illustrated below:

Root Cause:

The server’s CPU limit is too low, causing CPU throttling. When too many connections arrive at once, the work cannot finish within each CFS period, so requests queue up and tail latency spikes. The issue can be observed through Prometheus metrics for CPU throttling.
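A common way to surface this in Prometheus is the ratio of throttled CFS periods to total periods, built from cAdvisor’s standard metrics (the pod label value is a placeholder):

rate(container_cpu_cfs_throttled_periods_total{pod="<pod-name>"}[5m])
/
rate(container_cpu_cfs_periods_total{pod="<pod-name>"}[5m])

A ratio approaching 1 during the latency spikes points directly at throttling.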

The corrected result should show more consistent latency, as below:

Conclusion

  1. Two hours was insufficient, and there were minor issues with environment access. The environments had only been tested from macOS/Linux, which did not account for Windows users and caused SSH private key format issues.
  2. Participants had varying skill levels, leading to differences in problem-solving times and making it difficult for everyone to progress together.
  3. Problem design focused on native Kubernetes to avoid the vast ecosystem complexity, but finding suitable problems is increasingly challenging.
  4. CPU throttling issues are tricky to design due to their dependence on CPU and environment. In problem one, some participants did not encounter any issues. Future designs should address this.
  5. Overall, the workshop was well-received, with positive feedback highlighting the value of hands-on problem-solving and technical discussion. Participants expressed interest in similar future events.
