
How to Install and Use Docker Compose on the Latest CentOS
Docker Compose is a game-changer for managing multi-container Docker applications, especially on production servers running CentOS. Instead of juggling multiple docker run commands and trying to remember all those environment variables and network configurations, Compose lets you define your entire application stack in a simple YAML file. This guide walks you through installing Docker Compose on the latest CentOS releases, setting up your first multi-container application, and handling the inevitable hiccups you’ll encounter along the way.
Understanding Docker Compose Architecture
Docker Compose operates as a layer above Docker Engine, orchestrating multiple containers through a declarative configuration approach. Unlike Kubernetes which focuses on cluster orchestration, Compose excels at single-host multi-container applications. The compose file defines services, networks, and volumes, with the Compose runtime translating these definitions into Docker API calls.
The architecture consists of three main components:
- Compose CLI – processes docker-compose.yml files and communicates with Docker daemon
- Docker Engine – handles container lifecycle management
- Container runtime – executes the actual application processes
When you run docker-compose up, Compose creates isolated environments using project names, automatically handling service discovery through DNS resolution within custom bridge networks.
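The resource naming that flows from the project name is deterministic. A minimal sketch of the default container-naming scheme (Compose v2 joins project, service, and replica index with hyphens; the older v1 used underscores):

```python
def container_name(project: str, service: str, index: int, v2: bool = True) -> str:
    """Default container name Compose assigns: <project><sep><service><sep><index>."""
    sep = "-" if v2 else "_"
    return sep.join([project, service, str(index)])

# The project name defaults to the directory holding docker-compose.yml.
print(container_name("webapp-stack", "web", 1))               # webapp-stack-web-1
print(container_name("webapp-stack", "redis", 1, v2=False))   # webapp-stack_redis_1
```

Knowing this scheme helps when you drop down to plain docker commands (for example, `docker logs webapp-stack-web-1`).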
Prerequisites and System Requirements
Before diving into installation, verify your CentOS system meets the requirements. Docker Compose needs Docker Engine 1.13.1+ and works best with at least 2GB RAM for moderate workloads.
# Check CentOS version
cat /etc/centos-release
# Verify available memory
free -h
# Check Docker installation
docker --version
systemctl status docker
If Docker isn’t installed yet, you’ll need to set it up first:
# Install Docker on CentOS
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce docker-ce-cli containerd.io
sudo systemctl start docker
sudo systemctl enable docker
# Add your user to docker group (logout/login required)
sudo usermod -aG docker $USER
Installing Docker Compose on CentOS
There are several ways to install Docker Compose on CentOS, each with distinct advantages. Here’s the breakdown:
| Method | Pros | Cons | Best For |
|---|---|---|---|
| Binary Download | Latest version, simple | Manual updates | Production servers |
| pip install | Easy updates | Python dependency conflicts | Development environments |
| Package Manager | System integration | Often outdated versions | Corporate environments |
Method 1: Binary Installation (Recommended)
This method downloads the latest stable release directly from GitHub and provides the most reliable installation:
# Download latest Docker Compose binary
# (current releases name their assets in lowercase, e.g. docker-compose-linux-x86_64,
# so lowercase the output of uname -s)
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s | tr '[:upper:]' '[:lower:]')-$(uname -m)" -o /usr/local/bin/docker-compose
# Make it executable
sudo chmod +x /usr/local/bin/docker-compose
# Create symlink for easier access
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
# Verify installation
docker-compose --version
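If you script your installs, it helps to check the reported version programmatically. A small sketch of extracting the semver triple from the version output (the sample strings are illustrative; v1 and v2 format them slightly differently):

```python
import re

def parse_compose_version(output: str) -> tuple:
    """Extract the (major, minor, patch) triple from `docker-compose --version` output."""
    m = re.search(r"(\d+)\.(\d+)\.(\d+)", output)
    if not m:
        raise ValueError(f"no version found in: {output!r}")
    return tuple(int(p) for p in m.groups())

# Sample outputs from a v1 and a v2 binary:
print(parse_compose_version("docker-compose version 1.29.2, build 5becea4c"))  # (1, 29, 2)
print(parse_compose_version("Docker Compose version v2.24.5"))                 # (2, 24, 5)
```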
Method 2: Python pip Installation
If you prefer managing Docker Compose through Python’s package manager (note that pip installs the legacy Compose v1, a Python implementation that is no longer maintained):
# Install Python pip if not available
sudo yum install python3-pip
# Install Docker Compose via pip
sudo pip3 install docker-compose
# Verify installation
docker-compose --version
Creating Your First Docker Compose Application
Let’s build a practical multi-tier web application to demonstrate Docker Compose capabilities. This example includes a Python Flask web app, Redis cache, and PostgreSQL database – a common architecture pattern you’ll encounter in real projects.
First, create the project structure:
mkdir webapp-stack && cd webapp-stack
mkdir app
# Create the Flask application
cat > app/app.py << 'EOF'
from flask import Flask, jsonify
import redis
import psycopg2
import os

app = Flask(__name__)
redis_client = redis.Redis(host='redis', port=6379, decode_responses=True)

@app.route('/')
def hello():
    count = redis_client.incr('hits')
    return jsonify({
        'message': f'Hello! This page has been visited {count} times',
        'status': 'success'
    })

@app.route('/health')
def health():
    try:
        # Test Redis connection
        redis_client.ping()
        # Test PostgreSQL connection
        conn = psycopg2.connect(
            host="postgres",
            database=os.environ['POSTGRES_DB'],
            user=os.environ['POSTGRES_USER'],
            password=os.environ['POSTGRES_PASSWORD']
        )
        conn.close()
        return jsonify({'status': 'healthy', 'services': ['redis', 'postgres']})
    except Exception as e:
        return jsonify({'status': 'unhealthy', 'error': str(e)}), 500

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=True)
EOF
# Create requirements file
cat > app/requirements.txt << 'EOF'
Flask==2.3.3
redis==4.6.0
psycopg2-binary==2.9.7
EOF
# Create Dockerfile for the web app
cat > app/Dockerfile << 'EOF'
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
EOF
Now create the docker-compose.yml file that ties everything together:
cat > docker-compose.yml << 'EOF'
version: '3.8'

services:
  web:
    build: ./app
    ports:
      - "5000:5000"
    environment:
      - POSTGRES_DB=webapp
      - POSTGRES_USER=webuser
      - POSTGRES_PASSWORD=webpass123
    depends_on:
      - redis
      - postgres
    restart: unless-stopped
    volumes:
      - ./app:/app
    networks:
      - webapp-network

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    networks:
      - webapp-network
    restart: unless-stopped

  postgres:
    image: postgres:15-alpine
    environment:
      - POSTGRES_DB=webapp
      - POSTGRES_USER=webuser
      - POSTGRES_PASSWORD=webpass123
    volumes:
      - postgres-data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    networks:
      - webapp-network
    restart: unless-stopped

volumes:
  redis-data:
  postgres-data:

networks:
  webapp-network:
    driver: bridge
EOF
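Note that depends_on controls start order, not readiness: Compose starts dependencies before their dependents. A minimal sketch of that ordering as a topological sort of the depends_on graph from the file above:

```python
from graphlib import TopologicalSorter

# depends_on edges from the compose file: service -> the services it depends on
deps = {
    "web": {"redis", "postgres"},
    "redis": set(),
    "postgres": set(),
}

# static_order() yields dependencies before dependents;
# the relative order of redis and postgres is unspecified.
order = list(TopologicalSorter(deps).static_order())
print(order)  # web always comes last
```

Because "started" is not "ready to accept connections", production setups usually pair depends_on with healthchecks or retry logic in the application.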
Running and Managing Your Application Stack
With everything configured, let's bring up the application stack and explore essential management commands:
# Start all services in detached mode
docker-compose up -d
# View running services
docker-compose ps
# Check service logs
docker-compose logs web
docker-compose logs -f redis # Follow logs in real-time
# Scale a specific service (note: with the fixed "5000:5000" mapping only one
# web replica can bind the host port; drop the host port or use a range like
# "5000-5002:5000" before scaling)
docker-compose up -d --scale web=3
# Execute commands in running containers
docker-compose exec web python -c "import redis; print(redis.__version__)"
docker-compose exec postgres psql -U webuser -d webapp
# Stop all services
docker-compose stop
# Stop and remove containers, networks
docker-compose down
# Remove everything including volumes
docker-compose down -v
Test your application by accessing the endpoints:
# Test the main endpoint
curl http://localhost:5000/
# Check health status
curl http://localhost:5000/health
# Monitor Redis data
docker-compose exec redis redis-cli monitor
Real-world Use Cases and Applications
Docker Compose shines in several scenarios where single-host orchestration makes sense:
- Development environments - Replicate production infrastructure locally with consistent configurations across team members
- CI/CD pipelines - Spin up test environments rapidly for integration testing
- Small to medium production deployments - Single server applications with multiple microservices
- Edge computing - Lightweight orchestration for IoT gateways and edge nodes
- Staging environments - Cost-effective pre-production testing with identical service configurations
Here's a more complex real-world example - an e-commerce backend with monitoring:
cat > ecommerce-compose.yml << 'EOF'
version: '3.8'

services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/nginx/ssl
    depends_on:
      - api
      - frontend

  api:
    build: ./backend
    environment:
      - DATABASE_URL=postgresql://postgres:secret@postgres:5432/ecommerce
      - REDIS_URL=redis://redis:6379
      - JWT_SECRET=your-jwt-secret-here
    depends_on:
      - postgres
      - redis
    deploy:
      replicas: 2

  frontend:
    build: ./frontend
    environment:
      - API_BASE_URL=http://api:3000

  postgres:
    image: postgres:15
    environment:
      - POSTGRES_DB=ecommerce
      - POSTGRES_PASSWORD=secret
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql

  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data

  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml

  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    volumes:
      - grafana-data:/var/lib/grafana

volumes:
  postgres-data:
  redis-data:
  grafana-data:
EOF
Comparing Docker Compose with Alternatives
Understanding when to use Docker Compose versus other orchestration tools helps make informed architecture decisions:
| Tool | Best Use Case | Learning Curve | Scalability | Production Ready |
|---|---|---|---|---|
| Docker Compose | Single-host applications | Low | Limited | Small/Medium apps |
| Kubernetes | Multi-host clusters | High | Excellent | Enterprise grade |
| Docker Swarm | Simple clustering | Medium | Good | Moderate complexity |
| Podman Compose | Rootless containers | Low | Limited | Security-focused |
Performance-wise, Docker Compose introduces minimal overhead compared to running containers directly: the orchestration layer only issues Docker API calls at deploy time and is not in the data path, so the runtime CPU and memory cost of using Compose is negligible.
Best Practices and Security Considerations
Following established patterns will save you countless hours of debugging and security headaches:
Configuration Management
# Use environment files for sensitive data
cat > .env << 'EOF'
POSTGRES_PASSWORD=your-secure-password-here
JWT_SECRET=your-jwt-secret
API_KEY=your-api-key
EOF

# Reference in docker-compose.yml
services:
  app:
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    env_file:
      - .env
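Compose expands ${VAR} references (and ${VAR:-default} fallbacks) from the shell environment and the .env file before parsing the YAML. A simplified sketch of that interpolation, covering only these two forms:

```python
import re

def interpolate(value: str, env: dict) -> str:
    """Expand ${VAR} and ${VAR:-default} references the way Compose does
    for the simple cases (no nesting, no ${VAR:?error} form)."""
    def repl(match):
        name, _, default = match.group(1).partition(":-")
        return env.get(name, default)
    return re.sub(r"\$\{([^}]+)\}", repl, value)

env = {"POSTGRES_PASSWORD": "s3cret"}
print(interpolate("POSTGRES_PASSWORD=${POSTGRES_PASSWORD}", env))  # POSTGRES_PASSWORD=s3cret
print(interpolate("LOG_LEVEL=${LOG_LEVEL:-info}", env))            # LOG_LEVEL=info
```

`docker-compose config` prints the fully interpolated file, which is the quickest way to confirm your variables resolved as intended.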
Security Best Practices
- Never expose database ports to the host unless absolutely necessary
- Use specific image tags instead of 'latest' for reproducible builds
- Run containers as non-root users when possible
- Implement health checks for all services
- Use secrets management for production deployments
# Example with security improvements
version: '3.8'

services:
  web:
    image: myapp:1.2.3    # Specific version
    user: "1000:1000"     # Non-root user
    read_only: true       # Read-only filesystem
    tmpfs:
      - /tmp
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt
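With these healthcheck settings Docker probes the endpoint every 30 seconds and marks the container unhealthy only after the configured number of consecutive failures; a single passing probe resets the count. A minimal sketch of that rule:

```python
def is_unhealthy(results, retries=3):
    """Return True if the probe results contain `retries` consecutive failures.

    `results` is a sequence of booleans, one per healthcheck run (True = passed).
    """
    streak = 0
    for ok in results:
        streak = 0 if ok else streak + 1
        if streak >= retries:
            return True
    return False

print(is_unhealthy([True, False, False, True, False]))  # False: never 3 failures in a row
print(is_unhealthy([False, False, False]))              # True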
Performance Optimization
# Optimize for production
services:
  web:
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
Troubleshooting Common Issues
Even experienced developers encounter these typical Docker Compose problems. Here are the solutions that actually work:
Port Conflicts
# Error: Port already in use
# Solution: check what's using the port
sudo netstat -tulpn | grep :5432
sudo ss -tulpn | grep :5432

# Kill the conflicting process or change the host port in the compose file:
services:
  postgres:
    ports:
      - "5433:5432"  # Use a different host port
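You can also check a candidate port before editing the compose file. A small sketch using Python's socket module (the helper name is mine):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) == 0

# Demo: grab an ephemeral port, listen on it, and confirm the check sees it.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
print(port_in_use(port))     # True while srv is listening
srv.close()
```

Unlike grepping netstat output, a connect test also catches listeners inside containers that publish the port.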
Network Connectivity Issues
# Debug network connectivity between services
docker-compose exec web ping postgres
docker-compose exec web nslookup redis
# Inspect Docker networks
docker network ls
docker network inspect webapp-stack_default
# Force network recreation
docker-compose down
docker network prune
docker-compose up -d
Volume Permission Problems
# Fix common volume permission issues
# Method 1: Use an init container that chowns the volume first
services:
  init:
    image: alpine
    command: chown -R 1000:1000 /data
    volumes:
      - app-data:/data

  app:
    depends_on:
      - init
    volumes:
      - app-data:/app/data

# Method 2: Set a non-root user in the Dockerfile:
RUN adduser -D -s /bin/sh appuser
USER appuser
Memory and Resource Issues
# Monitor resource usage
docker-compose top
docker stats
# Add resource limits to prevent one service from consuming everything
services:
  database:
    image: postgres:15
    deploy:
      resources:
        limits:
          memory: 1G
        reservations:
          memory: 512M
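Compose accepts byte-size strings such as 512M or 1G for these limits, using binary (1024-based) units. A sketch of how such strings map to bytes, which is handy when comparing limits against `docker stats` output:

```python
def parse_size(value: str) -> int:
    """Convert Compose-style byte-size strings ('512M', '1G', '1024') to bytes.

    Uses binary units: k = 1024, m = 1024**2, g = 1024**3.
    """
    units = {"b": 1, "k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    value = value.strip().lower()
    if value[-1] in units:
        return int(float(value[:-1]) * units[value[-1]])
    return int(value)  # bare number = bytes

print(parse_size("512M"))  # 536870912
print(parse_size("1G"))    # 1073741824
```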
For comprehensive troubleshooting and advanced configuration options, consult the official Docker Compose documentation. The Docker Compose GitHub repository also contains valuable examples and community-contributed solutions for complex scenarios.
Docker Compose transforms complex multi-container applications into manageable, reproducible deployments. While it's not a replacement for full-scale orchestration platforms like Kubernetes, it excels in development environments and single-host production scenarios. The key to success lies in understanding its limitations, following security best practices, and leveraging its strengths for appropriate use cases. Start with simple configurations and gradually add complexity as your application requirements evolve.
