DevOps · 11 min read · 10 February 2025

Docker to Kubernetes: A Practical Path for Backend Engineers

From a single Dockerfile to a Kubernetes deployment with health checks, rolling updates, and autoscaling — the concepts that actually matter for shipping backend services.

Docker · Kubernetes · DevOps · CI/CD · Cloud

AI-Assisted Content. This article was generated with AI and reviewed for accuracy based on real engineering experience. Code examples are tested and production-relevant.

Introduction

Most backend engineers know Docker. Far fewer feel confident with Kubernetes. This guide bridges the gap — not by covering every K8s concept, but by walking through the specific path of deploying a real Node.js service.


Stage 1: A Production Dockerfile

# Multi-stage build — keeps image lean
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

FROM node:20-alpine AS runtime
WORKDIR /app
# Non-root user for security
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
COPY --from=builder /app/node_modules ./node_modules
COPY --chown=appuser:appgroup . .
USER appuser
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "dist/main.js"]

Key points:

  • Multi-stage keeps final image small (~150MB vs ~800MB)
  • Non-root user reduces attack surface
  • HEALTHCHECK lets Docker (and Compose) know if the container is actually serving — note that Kubernetes ignores Dockerfile HEALTHCHECK and relies on its own probes instead

Stage 2: Kubernetes Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: notification-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: notification-service
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: notification-service
    spec:
      containers:
        - name: notification-service
          image: myregistry/notification-service:v1.2.0
          ports:
            - containerPort: 3000
          envFrom:
            - secretRef:
                name: notification-service-secrets
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 15

Stage 3: Service + Ingress

apiVersion: v1
kind: Service
metadata:
  name: notification-service
spec:
  selector:
    app: notification-service
  ports:
    - port: 80
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: notification-ingress
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "100"
spec:
  rules:
    - host: notifications.myapp.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: notification-service
                port:
                  number: 80
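The Service above routes port 80 on its cluster IP to containerPort 3000 on every pod matching the selector. A small refinement worth knowing: targetPort can also reference a named container port, which keeps the Service stable if the container's port number ever changes. A sketch, assuming the Deployment's container port is given a name:

```yaml
# In the Deployment's container spec: name the port
ports:
  - name: http
    containerPort: 3000
---
# In the Service: refer to the name instead of the number
ports:
  - port: 80
    targetPort: http
```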

Stage 4: Horizontal Pod Autoscaler

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: notification-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: notification-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
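The HPA measures CPU utilization as a percentage of the pods' CPU *requests* — another reason setting requests is mandatory — and scales using the rule from the Kubernetes docs: desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization), clamped to the min/max bounds. A quick sketch of the arithmetic (function name is illustrative):

```javascript
// Core HPA scaling rule:
// desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric),
// clamped to [minReplicas, maxReplicas].
function desiredReplicas(current, currentUtil, targetUtil, min = 2, max = 10) {
  const raw = Math.ceil(current * (currentUtil / targetUtil));
  return Math.min(max, Math.max(min, raw));
}
```

So with the config above, 3 replicas averaging 140% of requested CPU against the 70% target scale out to 6, while a sustained spike can never push past maxReplicas.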

Key Principles

  1. Always set resource requests and limits — without them, a noisy neighbour can starve your pods
  2. Readiness ≠ Liveness — readiness gates traffic, liveness restarts containers
  3. Never use latest tag in production — pin to a specific digest for reproducibility
  4. Secrets go in K8s Secrets or an external vault — never bake credentials into images
  5. Rolling updates + readiness probes = zero-downtime deployments by default
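For principle 3, a digest pin looks like this — the digest below is a placeholder, not a real image reference:

```yaml
# A tag (v1.2.0) can be re-pushed and silently change; a digest cannot.
image: myregistry/notification-service@sha256:<digest-from-your-registry>
```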

The K8s learning curve is real but front-loaded — once the mental model clicks, the operational confidence it gives you is worth every YAML file.