Container Orchestration (Kubernetes basics)
Automate deployment, scaling, and management of containerized applications using Kubernetes.
Description
Kubernetes (K8s) is a container orchestration platform that automates the deployment, scaling, networking, and lifecycle management of containerized applications across clusters of machines. It abstracts away individual servers and presents a cluster as a single computational surface, using declarative configuration (YAML manifests) to define the desired state of applications. The Kubernetes control plane continuously reconciles actual state with desired state.
The core abstractions include Pods (the smallest deployable unit, typically one container), Deployments (managing ReplicaSets for rolling updates and rollbacks), Services (stable network endpoints with load balancing across Pods), ConfigMaps and Secrets (externalized configuration), and Ingress resources (HTTP routing and TLS termination). Namespaces provide logical isolation within a cluster, and resource quotas enforce CPU and memory limits per namespace.
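A minimal sketch of how these abstractions fit together, assuming a hypothetical `web` application listening on port 8080 (image name and labels are illustrative, not from a real registry):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web        # the Service below selects Pods by this label
    spec:
      containers:
        - name: web
          image: example/web:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP        # stable in-cluster endpoint, load-balanced across Pods
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

The Deployment owns a ReplicaSet that keeps three Pods running; the Service gives them a single stable address regardless of Pod churn.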
For most teams, managed Kubernetes services (EKS, GKE, AKS) remove the burden of operating the control plane. Key operational concerns include defining appropriate resource requests and limits, configuring liveness and readiness probes, setting Pod Disruption Budgets for safe node maintenance, and using Horizontal Pod Autoscalers to scale on CPU, memory, or custom metrics. Helm charts or Kustomize overlays are commonly used to template and manage Kubernetes manifests across environments.
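As one example of these operational controls, a Horizontal Pod Autoscaler targeting a hypothetical `web` Deployment might look like the following sketch (the target name and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:          # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add Pods when average CPU exceeds 70%
```

The HPA only works sensibly when the Deployment's containers declare CPU requests, since utilization is computed against the requested amount.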
Prompt Snippet
Define Kubernetes manifests for the application including a Deployment with 3 replicas, resource requests (256Mi memory, 250m CPU) and limits (512Mi memory, 500m CPU), liveness and readiness probes hitting /healthz with initialDelaySeconds of 15, and a rolling update strategy with maxSurge=1 and maxUnavailable=0. Create a ClusterIP Service, an Ingress with TLS termination via cert-manager, and externalize config via ConfigMap and Secrets. Use Kustomize overlays for staging vs production variance.
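A Deployment fragment matching the prompt above might look like this sketch (application name, image, and container port are placeholder assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra Pod during rollout
      maxUnavailable: 0    # never drop below desired replica count
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: example/app:1.0   # hypothetical image
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 15
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 15
```

With `maxUnavailable: 0`, a rollout proceeds only as new Pods pass their readiness probe, which is what makes the update zero-downtime.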
Related Terms
Docker Containerization
Package applications and their dependencies into isolated, portable containers using Docker.
Load Balancing
Distribute incoming network traffic across multiple server instances to ensure reliability and optimal resource utilization.
Auto-Scaling
Automatically adjust the number of running application instances based on real-time demand metrics.
Zero-Downtime Deployments
Deploy application updates without any period of unavailability by gradually replacing old instances with new ones.
Health Check Endpoints
Expose HTTP endpoints that report application health status for use by load balancers, orchestrators, and monitoring systems.
Infrastructure as Code (Terraform basics)
Define and provision cloud infrastructure using declarative configuration files that are version-controlled and peer-reviewed.