Docker · Containers · DevOps · Infrastructure

Docker Containerization: A Developer's Complete Guide

Master Docker from basics to production-ready containers. Learn Dockerfile optimization, multi-stage builds, and container best practices.

Bootspring Team
Engineering
July 15, 2025
5 min read

Containers have revolutionized how we build, ship, and run applications. Docker makes containerization accessible, but mastering it requires understanding the underlying concepts and best practices. AI can accelerate your Docker journey.

Why Containers?

Containers solve the "works on my machine" problem by packaging applications with their dependencies:

  • Consistency: Same environment everywhere
  • Isolation: Applications don't interfere with each other
  • Portability: Run anywhere Docker runs
  • Efficiency: Share OS kernel, lighter than VMs

Dockerfile Fundamentals

Basic Structure

```dockerfile
# Base image
FROM node:20-alpine

# Set working directory
WORKDIR /app

# Copy dependency files
COPY package*.json ./

# Install production dependencies (--only=production is deprecated
# on recent npm; --omit=dev is the modern equivalent)
RUN npm ci --omit=dev

# Copy application code
COPY . .

# Expose port
EXPOSE 3000

# Start command
CMD ["node", "server.js"]
```

Layer Optimization

```dockerfile
# ❌ Bad: any code change invalidates the dependency cache
FROM node:20-alpine
COPY . .
RUN npm install

# ✅ Good: dependencies cached in their own layer
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
```

Multi-Stage Builds

Build and Runtime Separation

```dockerfile
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM node:20-alpine AS production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package*.json ./

USER node
EXPOSE 3000
CMD ["node", "dist/server.js"]
```

Multiple Build Targets

```dockerfile
# Base stage
FROM node:20-alpine AS base
WORKDIR /app
COPY package*.json ./

# Development stage
FROM base AS development
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]

# Test stage
FROM base AS test
RUN npm ci
COPY . .
CMD ["npm", "test"]

# Production stage: the build step usually needs dev dependencies
# (bundlers, compilers), so install everything, build, then prune
FROM base AS production
RUN npm ci
COPY . .
RUN npm run build && npm prune --omit=dev
CMD ["node", "dist/server.js"]
```

Build a specific target:

```bash
docker build --target development -t myapp:dev .
docker build --target production -t myapp:prod .
```

Docker Compose

Development Environment

```yaml
# docker-compose.yml
version: '3.8'

services:
  app:
    build:
      context: .
      target: development
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      # Anonymous volume so the bind mount above doesn't hide
      # the node_modules installed inside the image
      - /app/node_modules
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgres://user:pass@db:5432/myapp
    depends_on:
      - db
      - redis

  db:
    image: postgres:15-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=myapp

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:
```

Production Compose

```yaml
# docker-compose.prod.yml
version: '3.8'

services:
  app:
    image: myapp:${VERSION:-latest}
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
    environment:
      - NODE_ENV=production
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
```

Image Optimization

Minimize Image Size

```dockerfile
# ❌ Large image (1GB+)
FROM node:20

# ✅ Alpine base (~100MB)
FROM node:20-alpine

# ✅ Distroless for production (even smaller)
FROM gcr.io/distroless/nodejs20-debian11
```
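Distroless images ship no shell or package manager, so they only work as the final stage of a multi-stage build: everything must be installed and built in an earlier stage and copied in. A minimal sketch of that pairing (stage names and paths are assumptions, matching the examples above):

```dockerfile
# Build stage: full toolchain available
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build && npm prune --omit=dev

# Runtime stage: distroless, no shell, runs as a locked-down base
FROM gcr.io/distroless/nodejs20-debian11
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
# Distroless nodejs images already use node as the entrypoint,
# so CMD is just the script to run
CMD ["dist/server.js"]
```

The trade-off: you can't `docker exec` a shell into a distroless container, so debug with an earlier stage (`--target builder`) instead.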

Remove Unnecessary Files

```
# .dockerignore
node_modules
npm-debug.log
.git
.gitignore
.env
*.md
tests/
coverage/
.nyc_output/
```

Security Scanning

```bash
# Scan for vulnerabilities
docker scout cves myapp:latest

# Alternative: Trivy
trivy image myapp:latest
```

Container Security

Run as Non-Root

```dockerfile
FROM node:20-alpine

# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001

WORKDIR /app
COPY --chown=nodejs:nodejs . .

# Switch to non-root user
USER nodejs

CMD ["node", "server.js"]
```

Read-Only Filesystem

```yaml
services:
  app:
    image: myapp:latest
    read_only: true
    tmpfs:
      - /tmp
      - /app/logs
```

Secret Management

```yaml
# Docker secrets (Swarm mode)
services:
  app:
    secrets:
      - db_password

secrets:
  db_password:
    external: true
```

Debugging Containers

Interactive Shell

```bash
# Running container
docker exec -it container_name sh

# New container from image
docker run -it --rm myapp:latest sh
```

Container Logs

```bash
# Follow logs
docker logs -f container_name

# Last 100 lines
docker logs --tail 100 container_name

# With timestamps
docker logs -t container_name
```

Resource Usage

```bash
# Real-time stats
docker stats

# Inspect container
docker inspect container_name
```

Health Checks

Dockerfile Health Check

```dockerfile
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
  CMD curl -f http://localhost:3000/health || exit 1
```
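One caveat: slim bases often don't include `curl` (node alpine images ship busybox `wget` instead, and distroless images have no shell at all). A sketch of a curl-free alternative, assuming Node 18+ where the global `fetch` is available:

```dockerfile
# Probe the health endpoint with node itself; exec-form CMD works
# even in shell-less images like distroless
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
  CMD ["node", "-e", "fetch('http://localhost:3000/health').then(r => process.exit(r.ok ? 0 : 1)).catch(() => process.exit(1))"]
```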

Application Health Endpoint

```javascript
app.get('/health', async (req, res) => {
  const health = {
    status: 'healthy',
    timestamp: new Date().toISOString(),
    checks: {}
  };

  // Check database
  try {
    await db.query('SELECT 1');
    health.checks.database = 'healthy';
  } catch (e) {
    health.checks.database = 'unhealthy';
    health.status = 'unhealthy';
  }

  // Check Redis
  try {
    await redis.ping();
    health.checks.redis = 'healthy';
  } catch (e) {
    health.checks.redis = 'unhealthy';
    health.status = 'unhealthy';
  }

  const statusCode = health.status === 'healthy' ? 200 : 503;
  res.status(statusCode).json(health);
});
```

CI/CD Integration

GitHub Actions

```yaml
name: Build and Push

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and Push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
```

Common Patterns

Entrypoint Scripts

```sh
#!/bin/sh
# docker-entrypoint.sh

# Wait for database
until pg_isready -h db -p 5432; do
  echo "Waiting for database..."
  sleep 2
done

# Run migrations
npm run migrate

# Start application
exec "$@"
```

```dockerfile
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["node", "server.js"]
```

Conclusion

Docker containerization is essential for modern development. With proper Dockerfiles, multi-stage builds, and security practices, you can create efficient, secure containers that run consistently everywhere.

AI helps generate optimized Dockerfiles, debug container issues, and implement best practices. Start with simple containers, then add complexity as needed—compose for local development, optimized images for production.
