Containers: Docker Deep Reality
Why containers exist, Dockerfiles, layer caching, networking, volumes, Compose, and common failures
Why Containers Exist (The VM Pain)
Before containers, deploying software meant one of:
- Bare metal: config drift, "works on my machine", slow provisioning
- VMs: better isolation, but each VM runs its own guest OS kernel (GBs of overhead, minutes to boot)
Containers solved this by sharing the host kernel while isolating filesystem, network, and process namespaces. The result: millisecond startup, MB instead of GB images, consistent environments.
VMs:
```
+---------+---------+---------+
|  App A  |  App B  |  App C  |
|  Libs   |  Libs   |  Libs   |
| Guest OS| Guest OS| Guest OS|
+---------+---------+---------+
|         Hypervisor          |
|          Host OS            |
+-----------------------------+
```

Containers:
```
+---------+---------+---------+
|  App A  |  App B  |  App C  |
|  Libs   |  Libs   |  Libs   |
+---------+---------+---------+
|      Container Runtime      |
|  Host OS Kernel (shared)    |
+-----------------------------+
```

Image vs Container vs Runtime
| Concept | What it is |
|---|---|
| Image | Immutable read-only template: layers of filesystem changes |
| Container | Running (or stopped) instance of an image, has its own writable layer |
| Registry | Storage for images (Docker Hub, ECR, GCR, ghcr.io) |
| Runtime | The software that runs containers (containerd, runc) |
```bash
# Images
docker images                  # list local images
docker pull nginx:1.25         # pull from registry
docker push myrepo/myapp:1.0   # push to registry
docker rmi nginx:1.25          # remove image
docker image prune             # remove unused images
```
```bash
# Containers
docker ps             # running containers
docker ps -a          # all containers (including stopped)
docker run nginx      # create + start
docker start <id>     # start stopped container
docker stop <id>      # graceful stop (SIGTERM, then SIGKILL after timeout)
docker kill <id>      # immediate kill (SIGKILL)
docker rm <id>        # remove stopped container
docker rm -f <id>     # force remove running container
```
Dockerfile
Layer Caching
Each RUN, COPY, and ADD instruction creates a new layer. Docker caches layers: if a layer hasn't changed, it reuses the cache.
```dockerfile
# BAD - invalidates the dependency cache every time code changes
FROM node:20-alpine
COPY . .
RUN npm install

# GOOD - copy package files first, install dependencies (cache hit when only code changes)
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./   # only changes when deps change
RUN npm ci              # cached until package*.json changes
COPY . .                # code changes only invalidate from here down
```
Cache invalidation rule: when any layer changes, all layers after it are rebuilt.
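When even a cache miss on `package*.json` is too slow, BuildKit cache mounts persist the package manager's download cache across builds without storing it in any layer. A sketch, assuming BuildKit is enabled and an npm project:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# the cache mount survives between builds but never lands in the image
RUN --mount=type=cache,target=/root/.npm npm ci
COPY . .
```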
ENTRYPOINT vs CMD
```
# CMD - default command, easily overridden
CMD ["nginx", "-g", "daemon off;"]

docker run myimage /bin/sh       # overrides CMD
```
```
# ENTRYPOINT - fixed command; run arguments are appended, not substituted
# (only docker run --entrypoint replaces it)
ENTRYPOINT ["python3", "server.py"]

docker run myimage --port 9000   # runs: python3 server.py --port 9000
```
```
# Common pattern - ENTRYPOINT for the executable, CMD for default args
ENTRYPOINT ["python3", "server.py"]
CMD ["--port", "8080"]

docker run myimage --port 9000   # uses the custom port instead of 8080
```
Environment Handling
```
# Baked into the image (visible in docker inspect - don't use for secrets)
ENV APP_ENV=production
ENV PORT=8080
```
```
# Build argument (only available during build, not in the running container)
ARG BUILD_VERSION
RUN echo "Building version $BUILD_VERSION"

# Pass env vars at runtime
docker run -e DATABASE_URL=postgres://... myimage
```
```
# Load from an env file
docker run --env-file .env.production myimage
```
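The file passed to `--env-file` is plain `KEY=value` lines, one per line, with `#` comments allowed. A minimal sketch (the variable names are illustrative):

```
# .env.production - no `export`, no quoting needed
DATABASE_URL=postgres://myuser:mypass@db:5432/mydb
LOG_LEVEL=info
```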
```
# Build with ARG
docker build --build-arg BUILD_VERSION=1.2.3 .
```
Never bake secrets into images: they remain visible in docker inspect and in the image layers.
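If a build step genuinely needs a secret (say, a token for a private package registry), BuildKit secret mounts avoid both ENV and ARG. A sketch, assuming BuildKit and a hypothetical `.npmrc` containing the token:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# the secret is mounted only for this RUN step and never stored in a layer
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci
```

Built with `docker build --secret id=npmrc,src=$HOME/.npmrc .` — the file is available during that one RUN, then gone.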
Multi-Stage Builds
Reduce final image size by building in one stage, copying only the artifacts to the final stage.
```
# Stage 1: build
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build        # outputs to /app/dist
```
```
# Stage 2: final image
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
```
The final image is just nginx plus the static files: no Node.js, no node_modules, no source code.
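The same pattern works for interpreted languages too. A sketch for the `python3 server.py` app used elsewhere in this document, assuming a `requirements.txt` exists:

```dockerfile
# Stage 1: install dependencies into an isolated prefix
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# Stage 2: copy only the installed packages and the app code
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY server.py .
CMD ["python3", "server.py"]
```

Build tools and pip's cache stay behind in the builder stage.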
```
# Build only up to a specific stage (useful for debugging)
docker build --target builder -t myapp:debug .
```
Image Size & Attack Surface
Smaller images = faster pulls, smaller attack surface, less to patch.
```
# Use slim or alpine variants
FROM python:3.12-slim   # ~50MB vs 1GB for full python
FROM node:20-alpine     # ~170MB vs 1GB
```
```
# Remove package manager caches in the same RUN layer
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*
```
```
# Run as a non-root user
RUN useradd -m appuser
USER appuser

# Analyze image layers
docker history myimage
dive myimage            # interactive layer explorer (install separately)
```
```
# Scan for vulnerabilities
docker scout cves myimage
trivy image myimage
```
Volumes vs Bind Mounts
| | Volume | Bind Mount |
|---|---|---|
| Location | Managed by Docker (/var/lib/docker/volumes/) | Anywhere on host |
| Controlled by | Docker | Host OS |
| Use case | Persistent data (DB, uploads) | Development (code hot-reload) |
| Performance | Better on non-Linux hosts (Docker Desktop) | Fast on Linux, slower on macOS/Windows |
| Backup | docker volume commands | Standard file backup |
```
# Named volume
docker run -v mydb:/var/lib/postgresql/data postgres
```
```
# Bind mount (development)
docker run -v $(pwd)/src:/app/src myimage
```
```
# Read-only bind mount
docker run -v $(pwd)/config:/etc/myapp:ro myimage
```
```
# Inspect volumes
docker volume ls
docker volume inspect mydb
```
```
# Back up a volume
docker run --rm -v mydb:/data -v $(pwd):/backup alpine \
    tar czf /backup/mydb-backup.tar.gz -C /data .
```
Docker Networking
```
# List networks
docker network ls
```
```
# Default networks:
#   bridge - default for containers on the same host
#   host   - shares the host network stack (no network isolation)
#   none   - no network
```
```
# Create a custom network (containers on it can reach each other by name)
docker network create myapp-network
```
```
# Connect containers to the network
docker run --network myapp-network --name db postgres
docker run --network myapp-network --name app myapp
# the "app" container can reach the "db" container at hostname "db"
```
```
# Inspect a network
docker network inspect myapp-network
```
Docker Compose Patterns
Compose is for defining multi-container apps as a single unit.
```yaml
version: '3.8'

services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://myuser:mypass@db:5432/mydb
      REDIS_URL: redis://redis:6379
    depends_on:
      db:
        condition: service_healthy   # wait for the health check
    volumes:
      - ./src:/app/src               # dev: hot reload
    networks:
      - backend

  db:
    image: postgres:16
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypass
      POSTGRES_DB: mydb
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U myuser"]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      - backend

  redis:
    image: redis:7-alpine
    networks:
      - backend

volumes:
  pgdata:

networks:
  backend:
```

```
# Start all services
docker compose up -d
```
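Compose also merges a `docker-compose.override.yml` on top of the base file automatically, a common way to keep dev-only settings out of the shared config. A sketch (the variable and port are hypothetical):

```yaml
# docker-compose.override.yml - merged automatically by `docker compose up`
services:
  app:
    environment:
      APP_ENV: development   # hypothetical dev-only variable
    ports:
      - "9229:9229"          # hypothetical debugger port
```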
```
# Follow logs
docker compose logs -f
docker compose logs -f app   # specific service
```
```
# Run a one-off command
docker compose exec app bash
docker compose run --rm app python manage.py migrate
```
```
# Stop and remove containers
docker compose down
```
```
# Stop and remove containers + volumes (WARNING: data loss)
docker compose down -v
```
```
# Rebuild images
docker compose build
docker compose up -d --build
```
Common Failures
Container Exits Immediately
```
# Check the exit code
docker ps -a | grep myapp
# STATUS: Exited (1) 3 minutes ago
```
```
# Check logs
docker logs myapp
docker logs --tail 50 myapp
```
```
# Run interactively to debug
docker run -it --entrypoint /bin/sh myimage

# Override ENTRYPOINT to get a shell
docker run -it --entrypoint /bin/bash myimage
```
Common causes:
- Application crashed on startup (check the logs)
- Missing environment variable causing a panic/exception
- Entrypoint script exits after running a command (use `exec` at the end)
- PID 1 problem: if the entrypoint is a shell script, `exec` the main process so it runs as PID 1 and receives signals
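The last two causes share one fix: do any setup work in the entrypoint, then `exec` the container's command so it takes over PID 1 and receives the SIGTERM from `docker stop`. A minimal sketch (the setup step is hypothetical):

```shell
#!/bin/sh
# entrypoint.sh - setup, then hand PID 1 to the real command
set -e
echo "running setup tasks..."   # e.g. migrations, config templating (hypothetical)
exec "$@"                       # CMD (or docker run args) arrive as "$@"
```

Paired with `ENTRYPOINT ["/entrypoint.sh"]` and `CMD ["python3", "server.py"]`, the container runs the setup, then the shell is replaced by the server.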
```
#!/bin/sh
# In entrypoint.sh - always exec the main process
exec python3 server.py "$@"
```
Port Not Exposed
```
# Check what port the container is listening on INSIDE
docker exec myapp ss -tlnp
```
```
# Check the port mapping
docker port myapp
```
```
# Verify the container is publishing the port
docker inspect myapp | grep PortBindings
```
```
# Correct: the port mapping must be specified at run time
docker run -p 8080:8080 myimage   # host:container
```
Env Misconfiguration
```
# Check what env vars are set in the running container
docker exec myapp env
docker exec myapp printenv DATABASE_URL
```
```
# Inspect the environment from outside
docker inspect myapp | grep -A10 '"Env"'
```
Dependency Not Ready
```
# Use health checks in Compose (not just depends_on).
# Without a health check, depends_on only waits for the container to start,
# not for the service inside it to be ready.
```
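Health checks can also live in the image itself, so every consumer gets them for free. A sketch, assuming the app serves HTTP on port 8080 with a `/health` endpoint and `curl` is installed in the image:

```dockerfile
# mark the container unhealthy after 3 consecutive failed probes
HEALTHCHECK --interval=10s --timeout=3s --retries=3 \
    CMD curl -f http://localhost:8080/health || exit 1
```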
```
# Alternative: use a wait loop
while ! nc -z db 5432; do sleep 1; done

# Or use wait-for-it.sh / dockerize
```