DevOps Zero to Hero: Part 3 - Docker Essentials
Introduction
Containerization has revolutionized how we build, ship, and run applications. Docker makes it possible to package applications with all their dependencies, ensuring they run consistently across different environments. In this part, you'll master Docker fundamentals and prepare our web application for cloud deployment.
Understanding Containers
Containers vs Virtual Machines
Virtual Machines:
Run complete OS
Heavy resource usage (GB of memory)
Slower startup (minutes)
Hardware-level virtualization
Strong isolation
Containers:
Share host OS kernel
Lightweight (MB of memory)
Fast startup (seconds)
OS-level virtualization
Process isolation
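You can feel the startup-speed difference yourself; a quick sketch (assumes Docker is already installed, and the alpine image is pulled on the first run):
# A throwaway container typically starts in well under a second
time docker run --rm alpine echo "hello from a container"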
Why Docker?
Docker solves the "it works on my machine" problem by:
Ensuring consistency across environments
Simplifying dependency management
Enabling microservices architecture
Facilitating CI/CD pipelines
Improving resource utilization
Docker Architecture
Core Components
Docker Engine: Core runtime
Docker Client: CLI tool for interacting with Docker
Docker Registry: Storage for Docker images (Docker Hub)
Docker Objects:
Images: Read-only templates
Containers: Running instances of images
Networks: Communication between containers
Volumes: Persistent data storage
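Each of these object types has a matching CLI listing command, which is a handy way to explore what is on your machine:
# List each type of Docker object
docker images          # images
docker ps -a           # containers (running and stopped)
docker network ls      # networks
docker volume ls       # volumes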
Installing Docker
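As a minimal sketch for a Linux host, Docker's official convenience script handles installation on most distributions; see docs.docker.com/engine/install for platform-specific instructions (Docker Desktop on macOS and Windows):
# Install Docker Engine with the official convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Run docker without sudo (log out and back in for this to take effect)
sudo usermod -aG docker $USER
# Verify the installation
docker --version
docker compose version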
Docker Compose Commands
# Start services
docker-compose up -d
# View logs
docker-compose logs -f
# Stop services
docker-compose down
# Rebuild and start
docker-compose up -d --build
# Scale services
docker-compose up -d --scale web=3
# View service status
docker-compose ps
# Execute command in service
docker-compose exec web sh
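These commands assume a docker-compose.yml for the web app along these lines (a sketch only; your actual file may differ). Note the host port is left unpinned so that --scale web=3 does not cause port conflicts:
version: '3.8'
services:
  web:
    build: .
    ports:
      - "3000"              # container port only, so scaled replicas get ephemeral host ports
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis
  redis:
    image: redis:alpine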
Container Networking
Network Types
Bridge (default): Isolated network for containers
Host: Container uses host's network
None: No networking
Overlay: Multi-host networking (Swarm)
Macvlan: Assign MAC address to container
Working with Networks
# List networks
docker network ls
# Create custom network
docker network create myapp-network
# Run container on specific network
docker run -d --network myapp-network --name app1 nginx
# Connect running container to network
docker network connect myapp-network container_name
# Inspect network
docker network inspect myapp-network
# Remove network
docker network rm myapp-network
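Containers on the same user-defined network can reach each other by container name thanks to Docker's built-in DNS; a quick check against the app1 container started above:
# Fetch the nginx welcome page from app1 by name
docker run --rm --network myapp-network busybox wget -qO- http://app1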
Docker Volumes and Data Persistence
Volume Types
Named Volumes: Managed by Docker
Bind Mounts: Map host directory
tmpfs Mounts: Memory only (Linux)
Working with Volumes
# Create named volume
docker volume create app-data
# List volumes
docker volume ls
# Run container with volume
docker run -v app-data:/data nginx
# Bind mount example
docker run -v $(pwd)/data:/data nginx
# Inspect volume
docker volume inspect app-data
# Remove volume
docker volume rm app-data
# Remove all unused volumes
docker volume prune
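To see persistence in action, write a file into the named volume from one container and read it back from another (a minimal sketch using the app-data volume above):
# Write a file into the volume, then let the container exit and be removed
docker run --rm -v app-data:/data alpine sh -c 'echo hello > /data/test.txt'
# The data outlives the container: read it from a brand-new one
docker run --rm -v app-data:/data alpine cat /data/test.txt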
Update Application for Redis Caching
Update src/app.js to include Redis caching:
const express = require('express');
const redis = require('redis');

const app = express();
const PORT = process.env.PORT || 3000;

// Redis client setup
const redisClient = redis.createClient({
  url: process.env.REDIS_URL || 'redis://redis:6379'
});

redisClient.on('error', (err) => {
  console.log('Redis Client Error', err);
});

redisClient.connect().catch(console.error);

// Middleware to count requests
app.use(async (req, res, next) => {
  try {
    await redisClient.incr('request_count');
  } catch (err) {
    console.error('Redis error:', err);
  }
  next();
});

// Health check endpoint
app.get('/health', async (req, res) => {
  const health = {
    status: 'healthy',
    timestamp: new Date().toISOString(),
    uptime: process.uptime(),
    environment: process.env.NODE_ENV || 'development'
  };

  try {
    await redisClient.ping();
    health.redis = 'connected';
  } catch (err) {
    health.redis = 'disconnected';
  }

  res.status(200).json(health);
});

// Main endpoint with caching
app.get('/', async (req, res) => {
  try {
    // Try to get from cache
    const cached = await redisClient.get('homepage');
    if (cached) {
      return res.json(JSON.parse(cached));
    }

    // Create response
    const response = {
      message: 'Welcome to DevOps Web App',
      version: '1.0.0',
      timestamp: new Date().toISOString(),
      endpoints: {
        health: '/health',
        info: '/info',
        metrics: '/metrics'
      }
    };

    // Cache for 60 seconds
    await redisClient.setEx('homepage', 60, JSON.stringify(response));
    res.json(response);
  } catch (err) {
    console.error('Error:', err);
    res.status(500).json({ error: 'Internal server error' });
  }
});

// Metrics endpoint
app.get('/metrics', async (req, res) => {
  try {
    const requestCount = await redisClient.get('request_count');
    res.json({
      requests_total: parseInt(requestCount) || 0,
      memory_usage_bytes: process.memoryUsage().heapUsed,
      uptime_seconds: process.uptime()
    });
  } catch (err) {
    res.status(500).json({ error: 'Metrics unavailable' });
  }
});

// Graceful shutdown
process.on('SIGTERM', async () => {
  console.log('SIGTERM signal received: closing HTTP server');
  await redisClient.quit();
  process.exit(0);
});

app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});

module.exports = app;
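This code uses the node-redis v4 API (createClient with a url option, connect(), setEx), so make sure the redis package is added alongside express before rebuilding the image; a sketch, assuming npm:
# Add the Redis client to the project and rebuild the container image
npm install express redis
docker-compose up -d --build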
Docker Registry and Image Management
Docker Hub
# Login to Docker Hub
docker login
# Tag image for push
docker tag devops-web-app:1.0.0 yourusername/devops-web-app:1.0.0
# Push to Docker Hub
docker push yourusername/devops-web-app:1.0.0
# Pull from Docker Hub
docker pull yourusername/devops-web-app:1.0.0
Private Registry with AWS ECR
# Get login token
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin [aws_account_id].dkr.ecr.us-east-1.amazonaws.com
# Create repository
aws ecr create-repository --repository-name devops-web-app
# Tag for ECR
docker tag devops-web-app:1.0.0 [aws_account_id].dkr.ecr.us-east-1.amazonaws.com/devops-web-app:1.0.0
# Push to ECR
docker push [aws_account_id].dkr.ecr.us-east-1.amazonaws.com/devops-web-app:1.0.0
Docker Security Best Practices
1. Use Official Base Images
# Good
FROM node:18-alpine
# Avoid
FROM random-user/node
2. Non-Root User
RUN addgroup -g 1001 -S nodejs && \
adduser -S nodejs -u 1001
USER nodejs
3. Minimize Layers
# Good - Single RUN command
RUN apt-get update && \
apt-get install -y package1 package2 && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Avoid - Multiple RUN commands
RUN apt-get update
RUN apt-get install -y package1
RUN apt-get install -y package2
4. Use .dockerignore
Create .dockerignore:
node_modules
npm-debug.log
.git
.gitignore
README.md
.env
.vscode
.idea
coverage
.nyc_output
*.log
5. Scan for Vulnerabilities
# Docker Scout (built-in)
docker scout cves devops-web-app:1.0.0
# Trivy scanner
trivy image devops-web-app:1.0.0
# Snyk
snyk container test devops-web-app:1.0.0
Container Orchestration Preview
While Docker Compose works for local development, production requires orchestration:
Docker Swarm: Docker's native orchestration
Kubernetes: Industry standard for container orchestration
Amazon ECS: AWS managed container service
Amazon EKS: AWS managed Kubernetes
We'll explore ECS deployment in Part 7.
Monitoring Docker Containers
Docker Stats
# Real-time stats
docker stats
# Stats for specific container
docker stats web-app
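docker stats also accepts a Go-template format string if you only need specific columns, which is handy for scripts:
# One-shot snapshot with selected columns
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"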
Health Checks
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD node healthcheck.js || exit 1
Create healthcheck.js:
const http = require('http');

const options = {
  host: 'localhost',
  port: 3000,
  path: '/health',
  timeout: 2000
};

const request = http.request(options, (res) => {
  console.log(`STATUS: ${res.statusCode}`);
  if (res.statusCode === 200) {
    process.exit(0);
  } else {
    process.exit(1);
  }
});

request.on('error', (err) => {
  console.log('ERROR:', err);
  process.exit(1);
});

request.end();
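Once the HEALTHCHECK instruction is in the image, the container's health state shows up in docker ps and can be queried directly (assuming the container is named web-app as in the stats example):
# Current health status: starting, healthy, or unhealthy
docker inspect --format='{{.State.Health.Status}}' web-app
# Full health log, including recent probe output
docker inspect --format='{{json .State.Health}}' web-app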
Hands-on Exercise: Complete Docker Workflow
Exercise 1: Build Multi-Container Application
Create the application structure
mkdir docker-exercise
cd docker-exercise
Create a Python Flask API (api/app.py):
from flask import Flask, jsonify
import redis
import os

app = Flask(__name__)
redis_client = redis.Redis(host='redis', port=6379, decode_responses=True)

@app.route('/api/visits')
def get_visits():
    visits = redis_client.incr('visits')
    return jsonify({'visits': visits})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
Create a Dockerfile for the API (api/Dockerfile):
FROM python:3.9-alpine
WORKDIR /app
RUN pip install flask redis
COPY app.py .
CMD ["python", "app.py"]
Create docker-compose.yml:
version: '3.8'

services:
  api:
    build: ./api
    ports:
      - "5000:5000"
    depends_on:
      - redis

  redis:
    image: redis:alpine
Run and test:
docker-compose up -d
curl http://localhost:5000/api/visits
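Each request increments the counter in Redis, so repeated calls should return increasing values, something like:
curl http://localhost:5000/api/visits   # {"visits": 1}
curl http://localhost:5000/api/visits   # {"visits": 2}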
Exercise 2: Optimize Docker Image
Compare image sizes:
# Build unoptimized
docker build -t app:large -f Dockerfile.large .
# Build optimized
docker build -t app:small -f Dockerfile.optimized .
# Compare sizes
docker images | grep app
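Dockerfile.large and Dockerfile.optimized are not spelled out here; one reasonable pairing for the Flask API is the full Debian-based Python image versus the Alpine variant already used above (an illustrative sketch; exact sizes vary):
# Dockerfile.large - full Debian-based image, roughly 1 GB
FROM python:3.9
WORKDIR /app
RUN pip install flask redis
COPY app.py .
CMD ["python", "app.py"]

# Dockerfile.optimized - Alpine-based image, a fraction of the size
FROM python:3.9-alpine
WORKDIR /app
RUN pip install flask redis
COPY app.py .
CMD ["python", "app.py"]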
Troubleshooting Docker
Common Issues and Solutions
Container won't start
docker logs container_name
docker inspect container_name
Port already in use
# Find process using port
lsof -i :3000
# Or change port mapping
docker run -p 3001:3000 image_name
Disk space issues
docker system df
docker system prune -a
Container can't access internet
# Check DNS
docker run busybox nslookup google.com
# Restart Docker daemon
sudo systemctl restart docker
Docker Cheat Sheet
Quick Reference
# Cleanup commands
docker system prune -a # Remove all unused data
docker container prune # Remove stopped containers
docker image prune -a # Remove unused images
docker volume prune # Remove unused volumes
docker network prune # Remove unused networks
# Useful aliases (add to ~/.bashrc)
alias dps='docker ps'
alias dpsa='docker ps -a'
alias di='docker images'
alias drm='docker rm $(docker ps -aq)'
alias drmi='docker rmi $(docker images -q)'
alias dlog='docker logs -f'
alias dexec='docker exec -it'
Key Takeaways
Docker containers provide consistent environments across development, testing, and production
Dockerfiles define how to build images; optimize them for size and security
Docker Compose orchestrates multi-container applications locally
Volumes provide persistent storage for containers
Always follow security best practices: use official images, run as non-root, scan for vulnerabilities
Container registries like Docker Hub and ECR store and distribute images
What's Next?
In Part 4, we'll build our first CI/CD pipeline with GitHub Actions. You'll learn:
GitHub Actions fundamentals
Creating workflows
Automated testing
Building and pushing Docker images
Deployment strategies
Secrets management
Additional Resources
Play with Docker - Online Docker playground