Kubernetes Assignment – 7
Kubernetes Best Practices
Basic Questions
- Enable the Metrics Server in your cluster for autoscaling.
- Deploy a simple nginx Deployment with 2 replicas.
- Apply a Horizontal Pod Autoscaler (HPA) to scale the nginx Deployment based on CPU usage.
- Generate load on the nginx Deployment and observe HPA scaling.
- Scale down the load and observe HPA scaling back.
- Deploy a Pod with CPU and memory resource requests/limits defined.
- Apply a Vertical Pod Autoscaler (VPA) in “recommendation” mode to the Pod.
- View VPA recommendations using kubectl describe vpa.
- Enable Cluster Autoscaler in Minikube (or a cloud-managed cluster).
- Check logs of the Cluster Autoscaler to see scaling activity.
- Document what High Availability (HA) means in Kubernetes.
- Deploy an HA control plane with at least 3 API servers (theoretical on a local cluster, practical on a cloud-managed one).
- Create a backup of cluster manifests using kubectl get all -o yaml > backup.yaml.
- Create a backup of etcd data using etcdctl snapshot save backup.db.
- Restore etcd data from a snapshot.
- Deploy a sample Pod on Amazon EKS and verify it runs.
- Deploy a sample Pod on Azure AKS and verify it runs.
- Deploy a sample Pod on Google GKE and verify it runs.
- List 5 best practices for managing production clusters.
- Write a YAML manifest applying resource requests/limits and labels for production readiness.
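The nginx and HPA tasks above can be sketched as a pair of manifests. This is a minimal example, not a required solution: the names (nginx, nginx-hpa), the image tag, and the resource and scaling numbers are illustrative assumptions. Note that CPU requests must be set on the containers, because the HPA computes utilization as a percentage of the request.

```yaml
# Deployment: 2 nginx replicas with resource requests/limits and labels.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.27        # illustrative tag
          resources:
            requests:
              cpu: 100m            # HPA utilization is measured against this
              memory: 64Mi
            limits:
              cpu: 250m
              memory: 128Mi
---
# HPA: scale the Deployment on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 2
  maxReplicas: 5                   # example ceiling
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # example threshold
```

Apply both with kubectl apply -f, then generate load and watch scaling with kubectl get hpa -w. The HPA only reports utilization once the Metrics Server is running.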
Intermediate Questions
- Create an HPA that scales a php-apache Deployment between 2 and 10 replicas.
- Configure the HPA to trigger at 50% CPU utilization.
- Use kubectl top pods to view resource usage and verify scaling.
- Apply VPA in “auto” mode to update Pod resources automatically.
- Run load tests on the VPA-enabled Pod and observe updates.
- Configure Cluster Autoscaler on EKS with minimum 1 and maximum 5 nodes.
- Trigger scaling by deploying a large workload and verify new nodes are added.
- Document the architecture of an HA Kubernetes cluster with multiple control plane nodes.
- Configure a backup job using Velero to back up cluster resources.
- Perform a restore operation using Velero.
- Deploy a StatefulSet for MySQL with a PVC on EKS.
- Back up the MySQL data using kubectl exec and save to a persistent volume.
- Deploy a Deployment on AKS and expose it via LoadBalancer Service.
- Deploy a Deployment on GKE and expose it via Ingress.
- Configure a centralized logging system for production using EFK.
- Create PodDisruptionBudgets (PDBs) for critical workloads.
- Apply anti-affinity rules to spread replicas across nodes.
- Configure liveness and readiness probes for a production Deployment.
- Add RBAC rules to restrict access to production cluster resources.
- Document a CI/CD pipeline that deploys workloads to EKS/AKS/GKE.
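Several of the production-hardening tasks above (HPA bounds, PDB, anti-affinity, probes) can be combined into one sketch. Assume a php-apache Deployment already exists; all names, thresholds, and probe paths here are illustrative assumptions, not prescribed values.

```yaml
# HPA: 2-10 replicas, triggering at 50% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
---
# PDB: keep at least one replica available during voluntary disruptions
# (node drains, cluster upgrades).
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: php-apache-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: php-apache
```

For the anti-affinity and probe tasks, the Deployment's Pod template might carry a fragment like this (probe endpoint is an assumption about the app):

```yaml
    spec:
      affinity:
        podAntiAffinity:
          # Prefer spreading replicas across nodes; "preferred" still schedules
          # if the cluster has fewer nodes than replicas.
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: php-apache
                topologyKey: kubernetes.io/hostname
      containers:
        - name: php-apache
          livenessProbe:
            httpGet:
              path: /            # assumed health endpoint
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /
              port: 80
            periodSeconds: 5
```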
Advanced Questions
- Deploy a microservices app with 3 components: frontend, backend, and database.
- Expose the frontend using Ingress with TLS enabled.
- Configure a ConfigMap for backend application settings.
- Create a Secret for database username and password.
- Mount a PersistentVolume for the database.
- Enable HPA for the backend service based on CPU usage.
- Enable VPA for the database Pod in recommendation mode.
- Configure Cluster Autoscaler on your cloud cluster (EKS/AKS/GKE) and verify scale-out.
- Integrate Prometheus and Grafana to monitor the microservices app.
- Deliver a final hands-on project:
  - Deploy frontend + backend + DB (with PV, ConfigMap, Secret)
  - Expose frontend with Ingress + TLS
  - Enable HPA for backend + VPA for DB
  - Integrate Prometheus/Grafana monitoring
  - Enable Cluster Autoscaler for production scale
  - Document best practices followed (security, resource limits, HA setup).
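The Secret and TLS Ingress pieces of the project can be sketched as follows. Every name here (db-credentials, frontend, shop.example.com, frontend-tls, the cert-manager annotation) is a placeholder assumption; adapt them to the actual app and certificate setup.

```yaml
# Secret for database credentials; stringData is base64-encoded on apply.
# Placeholder values only - in production, inject these from a secret manager.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  username: appuser
  password: changeme
---
# TLS-terminating Ingress for the frontend. Assumes an nginx ingress
# controller and (optionally) cert-manager issuing the frontend-tls Secret.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # assumes cert-manager
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - shop.example.com
      secretName: frontend-tls
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
```

The backend would consume the Secret via env.valueFrom.secretKeyRef, keeping credentials out of the ConfigMap and out of the image.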