Module 1: Environment Setup & Basics

Installing Docker

Docker provides containerization technology that packages applications with their dependencies. Let's start by installing Docker on your system.

macOS Installation

brew install --cask docker

Ubuntu/Debian Installation

# Download and run the convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Allow running docker without sudo (log out and back in to take effect)
sudo usermod -aG docker $USER

Verify Installation

docker --version
docker run hello-world

Essential Docker Commands

System Information

docker info
docker system df
docker version

Image Management

docker images
docker pull nginx:latest
docker rmi nginx:latest

Container Basics

docker run -d --name webserver -p 8080:80 nginx
docker ps
docker stop webserver
docker rm webserver
Key Concepts:
  • Images are read-only templates for containers
  • Containers are running instances of images
  • Docker Hub is the default registry for images
  • Use -d flag to run containers in detached mode

Module 2: Working with Images

Creating a Dockerfile

A Dockerfile defines the steps to create a Docker image. Let's build a Node.js application image.

Basic Node.js Dockerfile

FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]

Building the Image

docker build -t myapp:1.0 .
docker build -t myapp:latest .
docker images | grep myapp

Multi-stage Build Example

# Build stage
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/server.js"]

Image Optimization

Layer Caching Best Practices

# Good - dependencies change less frequently than source code
COPY package*.json ./
RUN npm install
COPY . .

# Bad - any code change invalidates the npm install cache
COPY . .
RUN npm install

Reducing Image Size

FROM alpine:3.14
RUN apk add --no-cache python3 py3-pip
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
COPY app.py .
CMD ["python3", "app.py"]
Pro Tips:
  • Use alpine-based images for smaller size
  • Combine RUN commands to reduce layers
  • Clean up package manager caches in the same RUN layer
  • Use .dockerignore to exclude unnecessary files
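
To illustrate the last tip, here is what a .dockerignore for a typical Node.js project might look like. The entries are assumptions for a generic project; adjust them to whatever actually lives in your build context.

```
# .dockerignore - keep the build context small and cache-friendly
node_modules
npm-debug.log
.git
.env
Dockerfile
docker-compose*.yml
*.md
```

Excluding node_modules is the big win: it keeps host-installed dependencies out of the context, so COPY . . stays fast and never overwrites the modules installed inside the image.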

Module 3: Container Management

Advanced Container Operations

Port Mapping Strategies

# Single port mapping
docker run -p 8080:80 nginx

# Multiple ports
docker run -p 8080:80 -p 8443:443 nginx

# Bind to a specific interface
docker run -p 127.0.0.1:8080:80 nginx

# Random host port
docker run -P nginx

Environment Variables

# Single environment variable
docker run -e DATABASE_URL=postgres://localhost/mydb postgres

# Multiple variables
docker run -e NODE_ENV=production \
  -e API_KEY=secret123 \
  -e PORT=3000 \
  myapp

# Using an env file
docker run --env-file .env myapp

Container Resource Limits

# Memory limit
docker run -m 512m nginx

# CPU limit
docker run --cpus="1.5" nginx

# Combined limits
docker run -m 1g --cpus="2" --memory-swap 2g nginx

Container Debugging

Executing Commands in Containers

# Interactive shell
docker exec -it container_name /bin/bash

# Run a specific command
docker exec container_name ls -la /app

# As a different user
docker exec -u www-data container_name whoami

Container Logs and Monitoring

# View logs
docker logs container_name

# Follow logs
docker logs -f container_name

# Last 100 lines
docker logs --tail 100 container_name

# With timestamps
docker logs -t container_name

# Container stats
docker stats container_name

Container Inspection

# Full inspection
docker inspect container_name

# Specific field
docker inspect -f '{{.NetworkSettings.IPAddress}}' container_name

# Environment variables
docker inspect -f '{{.Config.Env}}' container_name
Common Issues:
  • Port already in use - check with lsof -i :PORT
  • Container exits immediately - check logs and CMD/ENTRYPOINT
  • Permission denied - verify user permissions and file ownership

Module 4: Data Persistence

Docker Volumes

Volumes are the preferred mechanism for persisting data generated and used by Docker containers.

Named Volumes

# Create a named volume
docker volume create mydata

# Use the volume in a container
docker run -v mydata:/app/data nginx

# List volumes
docker volume ls

# Inspect a volume
docker volume inspect mydata

Bind Mounts

# Mount the current directory
docker run -v $(pwd):/app nginx

# Read-only mount
docker run -v $(pwd)/config:/etc/nginx/conf.d:ro nginx

# Development setup with hot reload
docker run -v $(pwd)/src:/app/src \
  -v /app/node_modules \
  -p 3000:3000 \
  node:16 npm run dev

Data Backup and Restore

Backing Up Volumes

# Backup using a temporary container
docker run --rm \
  -v mydata:/source \
  -v $(pwd):/backup \
  alpine tar czf /backup/mydata-backup.tar.gz -C /source .

Restoring Volumes

# Restore from backup
docker run --rm \
  -v mydata:/target \
  -v $(pwd):/backup \
  alpine tar xzf /backup/mydata-backup.tar.gz -C /target
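
Both commands hinge on tar's -C flag, which archives and unpacks paths relative to a directory rather than embedding absolute paths. A minimal sketch of the same round trip outside Docker (the /tmp paths and file contents are illustrative):

```shell
# Create a sample "volume" directory with one file
mkdir -p /tmp/vol-src /tmp/vol-dst
echo "important data" > /tmp/vol-src/data.txt

# Backup: archive the contents relative to /tmp/vol-src
tar czf /tmp/vol-backup.tar.gz -C /tmp/vol-src .

# Restore: unpack into a fresh directory
tar xzf /tmp/vol-backup.tar.gz -C /tmp/vol-dst
cat /tmp/vol-dst/data.txt   # prints: important data
```

Because the archive stores "./data.txt" rather than "/tmp/vol-src/data.txt", it can be restored into any target directory, which is exactly what lets the restore container unpack it into a different volume mount point.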

Database Backup Example

# PostgreSQL backup
docker exec postgres_container \
  pg_dump -U postgres dbname > backup.sql

# MySQL backup
docker exec mysql_container \
  mysqldump -u root -p dbname > backup.sql

# MongoDB backup
docker exec mongo_container \
  mongodump --archive=/tmp/backup.archive
docker cp mongo_container:/tmp/backup.archive .

Volume Drivers and Advanced Options

Using Different Volume Drivers

# Create a volume with a specific driver
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.100,rw \
  --opt device=:/path/to/dir \
  nfs_volume
Best Practices:
  • Use named volumes for production data
  • Use bind mounts for development
  • Never store data in container's writable layer
  • Back up important volumes regularly
  • Clean up unused volumes with docker volume prune

Module 5: Docker Compose Basics

Introduction to Docker Compose

Docker Compose is a tool for defining and running multi-container Docker applications using a YAML file.

Basic docker-compose.yml

version: '3.8'

services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./html:/usr/share/nginx/html
    restart: unless-stopped

  db:
    image: postgres:13
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: secret
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:

Essential Compose Commands

# Start services
docker-compose up -d

# View logs
docker-compose logs -f

# Stop services
docker-compose stop

# Remove containers and networks
docker-compose down

# Remove containers, networks, and volumes
docker-compose down -v

Real-World Application Stack

Full-Stack Application Example

version: '3.8'

services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    environment:
      - REACT_APP_API_URL=http://localhost:5000
    depends_on:
      - backend

  backend:
    build: ./backend
    ports:
      - "5000:5000"
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    depends_on:
      - db
      - cache

  db:
    image: postgres:13-alpine
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    volumes:
      - postgres_data:/var/lib/postgresql/data

  cache:
    image: redis:6-alpine
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:

Development vs Production Configs

# docker-compose.override.yml (for development)
version: '3.8'

services:
  backend:
    build:
      context: ./backend
      target: development
    volumes:
      - ./backend:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
    command: npm run dev

Using Multiple Compose Files

# Development
docker-compose up

# Production
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up

# Testing
docker-compose -f docker-compose.yml -f docker-compose.test.yml up
Compose Tips:
  • Use depends_on to control startup order
  • docker-compose.override.yml is loaded automatically unless you pass -f, so keep development-only settings there
  • Use .env files for environment variables
  • Version 3.8 is recommended for most use cases
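
As a sketch of the .env tip: docker-compose automatically reads a .env file from the project directory and substitutes its variables into the compose file. The variable names and values below are illustrative, not part of any real project:

```
# .env (read automatically from the project directory)
POSTGRES_PASSWORD=secret
APP_PORT=8080

# docker-compose.yml fragment using the variables
services:
  web:
    image: nginx:alpine
    ports:
      - "${APP_PORT}:80"
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
```

Note that .env substitutes into the compose file itself; it is separate from env_file:, which injects variables into a container's environment.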

Module 6: Advanced Compose

Networking in Docker Compose

Custom Networks

version: '3.8'

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true

services:
  web:
    image: nginx
    networks:
      - frontend
  api:
    image: node:16
    networks:
      - frontend
      - backend
  db:
    image: postgres
    networks:
      - backend

Service Discovery and Aliases

version: '3.8'

services:
  web:
    image: nginx
    networks:
      mynet:
        aliases:
          - webserver
          - nginx-server
  api:
    image: node:16
    environment:
      - UPSTREAM_URL=http://webserver
    networks:
      - mynet

networks:
  mynet:

Scaling and Load Balancing

Scaling Services

# Scale a service
docker-compose up -d --scale api=3

# With an nginx load balancer (docker-compose.yml)
version: '3.8'
services:
  nginx:
    image: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "80:80"
    depends_on:
      - app
  app:
    image: myapp
    expose:
      - "3000"

Health Checks

version: '3.8'

services:
  api:
    image: node:16
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

Advanced Configurations

Using Secrets

version: '3.8'

secrets:
  db_password:
    file: ./secrets/db_password.txt
  api_key:
    file: ./secrets/api_key.txt

services:
  db:
    image: postgres
    secrets:
      - db_password
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password

  api:
    image: myapp
    secrets:
      - api_key
      - db_password

Resource Constraints

version: '3.8'

services:
  api:
    image: myapp
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
    restart: on-failure:3
Production Considerations:
  • Always use specific image tags, not 'latest'
  • Implement health checks for all services
  • Use secrets for sensitive data
  • Set resource limits to prevent resource exhaustion
  • Use internal networks for backend services

Module 7: Production Patterns

Security Best Practices

Non-Root User

FROM node:16-alpine

# Create app user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001

WORKDIR /app

# Copy and install dependencies as root
COPY package*.json ./
RUN npm ci --only=production

# Copy app files and change ownership
COPY --chown=nodejs:nodejs . .

# Switch to non-root user
USER nodejs

EXPOSE 3000
CMD ["node", "server.js"]

Security Scanning

# Scan image for vulnerabilities
docker scan myapp:latest

# Using Trivy
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  aquasec/trivy image myapp:latest

# Pass build secrets without baking them into image layers
docker build --secret id=npm,src=$HOME/.npmrc \
  --progress=plain \
  -t myapp:secure .

Multi-Stage Build Patterns

Go Application Example

# Build stage
FROM golang:1.17-alpine AS builder
WORKDIR /app
COPY go.* ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o main .

# Final stage
FROM alpine:3.14
RUN apk --no-cache add ca-certificates
# Use a world-readable workdir so the nobody user can execute the binary
WORKDIR /app
COPY --from=builder /app/main .
USER nobody
CMD ["./main"]

React Application Example

# Build stage
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Monitoring and Logging

Centralized Logging Stack

version: '3.8'

services:
  app:
    image: myapp
    logging:
      driver: "fluentd"
      options:
        fluentd-address: localhost:24224
        tag: app.logs

  fluentd:
    image: fluent/fluentd
    volumes:
      - ./fluent.conf:/fluentd/etc/fluent.conf
    ports:
      - "24224:24224"

  elasticsearch:
    image: elasticsearch:7.14.0
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"

Prometheus Monitoring

version: '3.8'

services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana_data:/var/lib/grafana

  node_exporter:
    image: prom/node-exporter
    ports:
      - "9100:9100"

volumes:
  prometheus_data:
  grafana_data:
Production Checklist:
  • ✓ Use multi-stage builds to minimize image size
  • ✓ Run containers as non-root user
  • ✓ Implement health checks
  • ✓ Set up centralized logging
  • ✓ Configure monitoring and alerting
  • ✓ Regular security scanning
  • ✓ Use secrets management
  • ✓ Implement rate limiting and DDoS protection

Module 8: Troubleshooting & Debugging

Common Docker Issues

Container Won't Start

# Check container logs
docker logs container_name

# Check the exit code
docker inspect container_name --format='{{.State.ExitCode}}'

# Debug with a shell
docker run -it --entrypoint /bin/sh image_name

# Override CMD for debugging
docker run -it image_name /bin/bash

Networking Issues

# Test connectivity between containers
docker exec container1 ping container2

# Check network configuration
docker network inspect bridge

# List port mappings
docker port container_name

# Debug DNS resolution
docker exec container_name nslookup service_name

# Check iptables rules
sudo iptables -L -n -t nat

Storage and Disk Space

# Check disk usage
docker system df

# Clean up unused resources
docker system prune -a

# Remove dangling images
docker image prune

# Remove unused volumes
docker volume prune

# Complete cleanup (WARNING: removes everything)
docker system prune -a --volumes

Performance Debugging

Resource Usage Analysis

# Real-time stats
docker stats

# Check container processes
docker top container_name

# Detailed resource usage
docker exec container_name cat /proc/meminfo
docker exec container_name cat /proc/cpuinfo

# Check cgroup limits
docker exec container_name cat /sys/fs/cgroup/memory/memory.limit_in_bytes

Slow Build Debugging

# Build with detailed output
docker build --progress=plain --no-cache -t myapp .

# Clear the build cache, then inspect cache usage
docker builder prune
docker buildx du

# Use BuildKit for better caching
DOCKER_BUILDKIT=1 docker build -t myapp .

# Debug a specific build stage
docker build --target builder -t myapp-debug .

Advanced Debugging Techniques

Container File System Debugging

# Export the container filesystem
docker export container_name > container.tar
tar -tf container.tar | less

# Compare image layers
docker history image_name

# Inspect filesystem changes
docker diff container_name

# Copy files from a crashed container
docker cp container_name:/path/to/file ./debug/

Debug Running Container

# Attach to a running container
docker attach container_name

# Share the target's PID namespace for debugging
docker run -it --rm --pid container:container_name \
  --cap-add SYS_PTRACE alpine sh

# Use a dedicated debug container
docker run -it --rm \
  --pid container:container_name \
  --network container:container_name \
  --cap-add SYS_PTRACE \
  nicolaka/netshoot

Docker Daemon Debugging

# Enable debug mode in /etc/docker/daemon.json:
# {
#   "debug": true,
#   "log-level": "debug"
# }

# Restart Docker
sudo systemctl restart docker

# Check daemon logs
journalctl -u docker.service -f

# Test daemon connectivity
docker version
curl --unix-socket /var/run/docker.sock http://localhost/version
Troubleshooting Checklist:
  • Always check logs first: docker logs
  • Verify network connectivity between services
  • Check resource limits and available disk space
  • Ensure proper file permissions
  • Validate environment variables
  • Test with minimal configuration first
  • Use debug images when necessary
Emergency Commands:
  • docker kill $(docker ps -q) - Stop all containers
  • docker rm $(docker ps -a -q) - Remove all containers
  • docker rmi $(docker images -q) - Remove all images
  • systemctl restart docker - Restart Docker daemon