Kubernetes Assignment 7

Kubernetes Best Practices

Basic Questions

  1. Enable the Metrics Server in your cluster for autoscaling.
  2. Deploy a simple nginx Deployment with 2 replicas.
  3. Apply a Horizontal Pod Autoscaler (HPA) to scale the nginx Deployment based on CPU usage.
  4. Generate load on the nginx Deployment and observe HPA scaling.
  5. Scale down the load and observe HPA scaling back.
  6. Deploy a Pod with CPU and memory resource requests/limits defined.
  7. Apply a Vertical Pod Autoscaler (VPA) to the Pod with updateMode "Off" (recommendation-only mode).
  8. View VPA recommendations using kubectl describe vpa.
  9. Enable Cluster Autoscaler in Minikube (or in a cloud-managed cluster).
  10. Check logs of the Cluster Autoscaler to see scaling activity.
  11. Document what High Availability (HA) means in Kubernetes.
  12. Deploy an HA control plane with at least 3 API servers (theoretical on local, practical on cloud).
  13. Create a backup of cluster manifests using kubectl get all -o yaml > backup.yaml.
  14. Create a backup of etcd data using etcdctl snapshot save backup.db.
  15. Restore etcd data from a snapshot.
  16. Deploy a sample Pod on Amazon EKS and verify it runs.
  17. Deploy a sample Pod on Azure AKS and verify it runs.
  18. Deploy a sample Pod on Google GKE and verify it runs.
  19. List 5 best practices for managing production clusters.
  20. Write a YAML manifest applying resource requests/limits and labels for production readiness.
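
The Deployment, resource limits, labels, and HPA tasks above (items 2, 3, 6, and 20) can be sketched in a single manifest. This is an illustrative starting point, not a required answer: the names, labels, image tag, and CPU/memory values are assumptions you should adapt.

```yaml
# Sketch only: names, labels, image tag, and thresholds are assumed values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
    environment: production   # label for production readiness (item 20)
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          resources:
            requests:          # scheduler uses requests; HPA utilization is relative to them
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```

Note that the HPA needs the Metrics Server (item 1) to be running first — in Minikube, `minikube addons enable metrics-server` — and you can watch scaling with `kubectl get hpa -w` while generating load.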

Intermediate Questions

  1. Create an HPA that scales a php-apache Deployment between 2 and 10 replicas.
  2. Configure the HPA to trigger at 50% CPU utilization.
  3. Use kubectl top pods to view resource usage and verify scaling.
  4. Apply VPA with updateMode "Auto" to update Pod resources automatically.
  5. Run load tests on the VPA-enabled Pod and observe updates.
  6. Configure Cluster Autoscaler on EKS with minimum 1 and maximum 5 nodes.
  7. Trigger scaling by deploying a large workload and verify new nodes are added.
  8. Document the architecture of an HA Kubernetes cluster with multiple control plane nodes.
  9. Configure a backup job using Velero to back up cluster resources.
  10. Perform a restore operation using Velero.
  11. Deploy a StatefulSet for MySQL with a PVC on EKS.
  12. Back up the MySQL data using kubectl exec (e.g. by running mysqldump inside the Pod) and save the dump to a persistent volume.
  13. Create a Deployment on AKS and expose it via a LoadBalancer Service.
  14. Create a Deployment on GKE and expose it via an Ingress.
  15. Configure a centralized logging system for production using the EFK stack (Elasticsearch, Fluentd, Kibana).
  16. Create PodDisruptionBudgets (PDBs) for critical workloads.
  17. Apply anti-affinity rules to spread replicas across nodes.
  18. Configure liveness and readiness probes for a production Deployment.
  19. Add RBAC rules to restrict access to production cluster resources.
  20. Document a CI/CD pipeline that deploys workloads to EKS/AKS/GKE.
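
For items 1, 2, and 16 above, the HPA and PodDisruptionBudget can be sketched together. The target Deployment name (php-apache) comes from the tasks; the label selector and minAvailable value are assumed placeholders.

```yaml
# HPA (items 1-2): 2-10 replicas, triggered at 50% CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
---
# PodDisruptionBudget (item 16): keep at least one replica up
# during voluntary disruptions such as node drains.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: php-apache-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: php-apache   # assumed label; must match the Deployment's Pod labels
```

To verify scaling (item 3), one common approach is a temporary load-generator Pod that loops wget requests against the php-apache Service, then watching `kubectl top pods` and `kubectl get hpa -w` as replicas grow and shrink.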

Advanced Questions

  1. Deploy a microservices app with 3 components: frontend, backend, and database.
  2. Expose the frontend using Ingress with TLS enabled.
  3. Configure a ConfigMap for backend application settings.
  4. Create a Secret for database username and password.
  5. Mount a PersistentVolume for the database.
  6. Enable HPA for the backend service based on CPU usage.
  7. Enable VPA for the database Pod in recommendation mode.
  8. Configure Cluster Autoscaler on your cloud cluster (EKS/AKS/GKE) and verify scale-out.
  9. Integrate Prometheus and Grafana to monitor the microservices app.
  10. Deliver a final hands-on project:
    • Deploy frontend + backend + DB (with PV, ConfigMap, Secret)
    • Expose frontend with Ingress + TLS
    • Enable HPA for backend + VPA for DB
    • Integrate Prometheus/Grafana monitoring
    • Enable Cluster Autoscaler for production scale
    • Document best practices followed (security, resource limits, HA setup).
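
For the Ingress-with-TLS and Secret tasks above (items 2 and 4), a minimal sketch follows. The hostname, Secret names, issuer, and credentials are all placeholder assumptions; the cert-manager annotation assumes cert-manager is installed, otherwise the TLS Secret must be created manually.

```yaml
# Sketch: hostname, Secret names, and Service name are assumed placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
  annotations:
    # Assumes cert-manager is installed; remove if creating the TLS Secret by hand.
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com      # placeholder hostname
      secretName: frontend-tls # TLS Secret holding cert + key
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
---
# Secret for database credentials (item 4); stringData avoids manual base64 encoding.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  username: appuser    # placeholder
  password: change-me  # placeholder
```

The backend and database Pods would then reference db-credentials via env valueFrom.secretKeyRef or a volume mount, rather than hard-coding credentials in the manifests.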