
Using Environment Variables in Docker Containers
Environment variables in Docker containers are one of those things that seem trivial until you need to manage configuration across different environments, handle secrets properly, or troubleshoot why your app works locally but fails in production. They’re the bridge between your containerized applications and the external world, allowing you to inject configuration, API keys, database URLs, and feature flags without rebuilding images. This post walks through the practical aspects of using environment variables in Docker – from basic setup to advanced patterns, common gotchas that’ll bite you in production, and security considerations that could save your job.
How Environment Variables Work in Docker
Docker handles environment variables through multiple layers. When you run a container, Docker creates an isolated process namespace where environment variables exist as key-value pairs accessible to any process running inside that container. These variables can come from several sources: the Dockerfile, docker run commands, docker-compose files, or the host system itself.
The container’s environment is established at startup and remains consistent throughout the container’s lifecycle. This makes environment variables perfect for configuration that needs to be determined at deployment time but shouldn’t change while the container is running.
Here’s what happens under the hood:
- Docker reads environment variables from various sources in a specific precedence order
- Variables are injected into the container’s process environment during initialization
- Applications running inside the container can access these variables using standard system calls
- Child processes inherit the environment from their parent process
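A quick way to see that last point in action: the minimal sketch below (plain Python; the GREETING variable is just an illustration) reads a value injected by Docker and spawns a child process that sees the same value.
# env_demo.py - run inside a container started with: docker run -e GREETING=hello my-image python env_demo.py
import os
import subprocess

# The parent process reads the variable Docker injected at startup
print(f"parent sees GREETING={os.getenv('GREETING')}")

# A child process inherits the same environment automatically
subprocess.run(['python', '-c', "import os; print('child sees', os.getenv('GREETING'))"])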
Basic Implementation and Setup
Let’s start with the fundamental ways to set environment variables in Docker containers. The most straightforward approach uses the -e flag with docker run:
docker run -e DATABASE_URL=postgresql://user:pass@localhost/mydb \
  -e API_KEY=your-secret-key \
  -e DEBUG=true \
  my-app:latest
For multiple variables, you can use an environment file instead. Note that docker run --env-file treats each line literally as KEY=value, so quotes become part of the value and no interpolation is performed:
# Create .env file
DATABASE_URL=postgresql://user:pass@localhost/mydb
API_KEY=your-secret-key
DEBUG=true
LOG_LEVEL=info
# Use the file
docker run --env-file .env my-app:latest
In your Dockerfile, you can set default values using the ENV instruction:
FROM node:16
ENV NODE_ENV=production
ENV PORT=3000
ENV LOG_LEVEL=info
WORKDIR /app
COPY . .
CMD ["node", "server.js"]
For docker-compose, environment variables become much more manageable:
version: '3.8'
services:
  web:
    build: .
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/myapp
      - REDIS_URL=redis://redis:6379
      - NODE_ENV=production
    env_file:
      - .env
  db:
    image: postgres:13
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
Real-World Examples and Use Cases
Let’s look at some practical scenarios where environment variables shine. Here’s a Python Flask application that demonstrates common patterns:
# app.py
import os
from flask import Flask

app = Flask(__name__)

# Configuration from environment variables
app.config['DATABASE_URL'] = os.getenv('DATABASE_URL', 'sqlite:///default.db')
app.config['SECRET_KEY'] = os.getenv('SECRET_KEY', 'dev-key-change-in-prod')
app.config['DEBUG'] = os.getenv('DEBUG', 'False').lower() == 'true'
app.config['PORT'] = int(os.getenv('PORT', 5000))

# Feature flags
ENABLE_LOGGING = os.getenv('ENABLE_LOGGING', 'true').lower() == 'true'
MAX_UPLOAD_SIZE = int(os.getenv('MAX_UPLOAD_SIZE', 16777216))  # 16MB default

@app.route('/health')
def health():
    return {
        'status': 'healthy',
        'environment': os.getenv('FLASK_ENV', 'development'),
        'debug': app.config['DEBUG']
    }

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=app.config['PORT'], debug=app.config['DEBUG'])
The corresponding Dockerfile and environment setup:
# Dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
# docker-compose.yml for different environments
version: '3.8'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    environment:
      - DATABASE_URL=postgresql://postgres:password@db:5432/myapp
      - SECRET_KEY=${SECRET_KEY}
      - DEBUG=false
      - ENABLE_LOGGING=true
      - MAX_UPLOAD_SIZE=52428800  # 50MB for production
    depends_on:
      - db
      - redis
  db:
    image: postgres:13
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
  redis:
    image: redis:6-alpine
volumes:
  postgres_data:
For a Node.js microservice handling different deployment environments:
// config.js
module.exports = {
  port: parseInt(process.env.PORT) || 3000,
  nodeEnv: process.env.NODE_ENV || 'development',
  database: {
    host: process.env.DB_HOST || 'localhost',
    port: parseInt(process.env.DB_PORT) || 5432,
    name: process.env.DB_NAME || 'myapp',
    user: process.env.DB_USER || 'user',
    password: process.env.DB_PASSWORD || 'password',
    ssl: process.env.DB_SSL === 'true',
    poolSize: parseInt(process.env.DB_POOL_SIZE) || 10
  },
  redis: {
    url: process.env.REDIS_URL || 'redis://localhost:6379',
    ttl: parseInt(process.env.CACHE_TTL) || 3600
  },
  auth: {
    jwtSecret: process.env.JWT_SECRET || 'dev-secret',
    tokenExpiry: process.env.TOKEN_EXPIRY || '24h'
  },
  features: {
    rateLimiting: process.env.ENABLE_RATE_LIMITING !== 'false',
    logging: process.env.LOG_LEVEL || 'info',
    metrics: process.env.ENABLE_METRICS === 'true'
  }
};
Environment Variable Sources and Precedence
Understanding the order of precedence is crucial when environment variables conflict. Docker resolves variables in this order (highest to lowest priority):
| Priority | Source | Command/Method | Use Case |
|---|---|---|---|
| 1 (Highest) | Command line -e flag | docker run -e VAR=value | Override for testing/debugging |
| 2 | Environment file | docker run --env-file | Environment-specific configs |
| 3 | docker-compose environment | environment: section | Service-specific settings |
| 4 | docker-compose env_file | env_file: directive | Shared configuration |
| 5 (Lowest) | Dockerfile ENV | ENV VAR=value | Default values |
Here’s a practical example showing precedence in action:
# Dockerfile
ENV API_URL=http://localhost:3000
ENV DEBUG=false
# .env file
API_URL=https://staging-api.example.com
LOG_LEVEL=debug
# docker-compose.yml
services:
  app:
    build: .
    env_file: .env
    environment:
      - DEBUG=true  # This overrides both the Dockerfile and the .env file
The resulting environment inside the container:
- API_URL=https://staging-api.example.com (from .env file)
- DEBUG=true (from docker-compose environment, overrides Dockerfile)
- LOG_LEVEL=debug (from .env file)
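If you want to confirm what actually won on your own stack, a throwaway script makes the result explicit. A sketch — verify_env.py is an illustrative name, and the variable list just mirrors the example above:
# verify_env.py - print the variables from the precedence example
import os

for name in ('API_URL', 'DEBUG', 'LOG_LEVEL'):
    print(f"{name}={os.getenv(name, '<unset>')}")
Assuming the script is copied into the image, run it with docker compose run --rm app python verify_env.py and compare the output to the table above.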
Advanced Patterns and Best Practices
Variable substitution in docker-compose files enables dynamic configuration based on the host environment:
# Host environment
export DATABASE_PASSWORD=super-secret-password
export APP_VERSION=1.2.3
# docker-compose.yml
version: '3.8'
services:
  app:
    image: myapp:${APP_VERSION:-latest}
    environment:
      - DATABASE_URL=postgresql://user:${DATABASE_PASSWORD}@db:5432/myapp
      - BUILD_DATE=${BUILD_DATE:-unknown}
      - GIT_COMMIT=${GITHUB_SHA:-local}
For complex applications, consider using a configuration container pattern (note that the app container uses the POSIX-compatible . command rather than bash's source, since minimal images often ship only sh):
version: '3.8'
services:
  config:
    image: busybox
    volumes:
      - config_data:/config
    command: |
      sh -c "
        echo 'DATABASE_URL=postgresql://user:pass@db:5432/prod' > /config/app.env
        echo 'REDIS_URL=redis://redis:6379' >> /config/app.env
        echo 'LOG_LEVEL=info' >> /config/app.env
      "
  app:
    image: myapp:latest
    depends_on:
      - config
    volumes:
      - config_data:/config
    command: sh -c ". /config/app.env && exec python app.py"
volumes:
  config_data:
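Alternatively, the application can parse the file itself instead of relying on the shell. A minimal sketch, assuming the same KEY=value format the config container writes (load_env.py is an illustrative name):
# load_env.py - read KEY=value lines into os.environ at startup (sketch)
import os

def load_env_file(path):
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, comments, and anything that isn't KEY=value
            if not line or line.startswith('#') or '=' not in line:
                continue
            key, _, value = line.partition('=')
            # Real environment variables take priority over file values
            os.environ.setdefault(key.strip(), value.strip())

load_env_file('/config/app.env')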
Multi-stage Docker builds can use build-time variables that don’t leak into the final image:
FROM node:16 AS builder
ARG BUILD_ENV=production
ARG API_ENDPOINT
ENV REACT_APP_API_URL=${API_ENDPOINT}
ENV NODE_ENV=${BUILD_ENV}
WORKDIR /app
COPY package*.json ./
# Dev dependencies are needed for the build step, so don't prune them here
RUN npm ci
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
ENV NGINX_PORT=80
EXPOSE 80
# Build with arguments
docker build --build-arg API_ENDPOINT=https://api.prod.com \
  --build-arg BUILD_ENV=production \
  -t myapp:prod .
One caveat: values passed with --build-arg still show up in the image history of the stage that consumes them, so keep real secrets out of build arguments too.
Security Considerations and Secret Management
Environment variables are visible to anyone who can inspect your containers: docker inspect prints every variable in a container’s configuration, and they also show up in /proc/<pid>/environ on the host. That makes them unsuitable for sensitive data without proper precautions. Here’s what you need to know:
Never put secrets directly in Dockerfiles or docker-compose files that get committed to version control:
# BAD - Don't do this
ENV DATABASE_PASSWORD=super-secret-123
ENV API_KEY=sk-1234567890abcdef
Use Docker secrets for sensitive data in production. Note that external secrets like the ones below require Swarm mode (docker swarm init); Compose can alternatively read secrets from local files:
# Create secrets
echo "my-db-password" | docker secret create db_password -
echo "my-api-key" | docker secret create api_key -
# docker-compose.yml with secrets
version: '3.8'
services:
  app:
    image: myapp:latest
    secrets:
      - db_password
      - api_key
    environment:
      - DATABASE_USER=myuser
      - DATABASE_HOST=db
    # Secrets are mounted as files in /run/secrets/
secrets:
  db_password:
    external: true
  api_key:
    external: true
Application code to read secrets:
# Python example
import os

def get_secret(secret_name):
    try:
        with open(f'/run/secrets/{secret_name}', 'r') as secret_file:
            return secret_file.read().strip()
    except IOError:
        # Fall back to an environment variable for development
        return os.getenv(secret_name.upper())

# Usage
database_password = get_secret('db_password')
api_key = get_secret('api_key')
For development environments, use separate .env files that aren’t committed:
# .env.example (committed to repo)
DATABASE_URL=postgresql://user:password@localhost:5432/myapp
API_KEY=your-api-key-here
DEBUG=true
# .env (gitignored, actual secrets)
DATABASE_URL=postgresql://user:real-password@localhost:5432/myapp
API_KEY=sk-real-api-key-here
DEBUG=true
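For Python projects, the widely used python-dotenv package can load that local .env automatically during development. This is purely a convenience for running outside Docker; inside a container the variables arrive via --env-file or compose instead. A minimal sketch:
# settings.py - development convenience (pip install python-dotenv)
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the working directory if present; harmless no-op otherwise

DATABASE_URL = os.getenv('DATABASE_URL')
API_KEY = os.getenv('API_KEY')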
You can also integrate with external secret management systems:
# Using HashiCorp Vault (assumes the vault CLI is installed in the app image)
version: '3.8'
services:
  app:
    image: myapp:latest
    environment:
      - VAULT_ADDR=https://vault.example.com
      - VAULT_TOKEN_FILE=/vault/token
    volumes:
      - vault_token:/vault
    command: |
      sh -c "
        export DATABASE_PASSWORD=$$(vault kv get -field=password secret/myapp/db)
        export API_KEY=$$(vault kv get -field=key secret/myapp/api)
        exec python app.py
      "
volumes:
  vault_token:
    external: true
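If the application is Python, you can also skip the shell wrapper and fetch secrets in-process with Vault's hvac client library. A hedged sketch — the mount point and secret paths are assumptions mirroring the compose example above:
# vault_secrets.py - fetch secrets with hvac (pip install hvac); paths are illustrative
import os
import hvac

client = hvac.Client(
    url=os.environ['VAULT_ADDR'],
    token=open(os.environ['VAULT_TOKEN_FILE']).read().strip(),
)

# KV v2 read; adjust path and mount_point to match your Vault layout
secret = client.secrets.kv.v2.read_secret_version(path='myapp/db')
DATABASE_PASSWORD = secret['data']['data']['password']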
Common Pitfalls and Troubleshooting
Variable interpolation in docker-compose can be tricky. Undefined variables silently become empty strings:
# If UNDEFINED_VAR is not set in the host environment
services:
  app:
    image: myapp:latest
    environment:
      - API_URL=https://api.${UNDEFINED_VAR}.com  # Results in https://api..com
Use default values and validation:
# Better approach
services:
  app:
    image: myapp:latest
    environment:
      - API_URL=https://api.${API_ENVIRONMENT:-staging}.com
      - DATABASE_URL=${DATABASE_URL:?DATABASE_URL is not set}
Debugging environment variable issues:
# Check what environment variables are actually set
docker run --rm myapp:latest env | sort
# Interactive debugging
docker run -it --env-file .env myapp:latest /bin/bash
# Inside container, check specific variables
echo $DATABASE_URL
printenv | grep -i database
Watch out for variable expansion timing in shell commands:
# Problem: Variable expanded on host, not in container
docker run myapp:latest sh -c "echo $HOME" # Shows host's $HOME
# Solution: Escape or use single quotes
docker run myapp:latest sh -c "echo \$HOME" # Shows container's $HOME
docker run myapp:latest sh -c 'echo $HOME' # Shows container's $HOME
Boolean environment variables are strings, not actual booleans:
// JavaScript
const debug = process.env.DEBUG === 'true';  // Correct
const debug = process.env.DEBUG;             // Wrong - always truthy if set, even "false"

# Python
import os
debug = os.getenv('DEBUG', 'false').lower() == 'true'  # Correct
debug = bool(os.getenv('DEBUG'))  # Wrong - always True if set
Numeric variables need explicit conversion:
// Node.js
const port = parseInt(process.env.PORT) || 3000;
const timeout = parseFloat(process.env.TIMEOUT) || 30.0;

# Python
import os
port = int(os.getenv('PORT', 3000))
timeout = float(os.getenv('TIMEOUT', 30.0))
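Since every service ends up re-implementing these conversions, it pays to centralize them in a small helper module. A sketch — names like env_bool and env_int are illustrative, not a standard API:
# env_utils.py - typed environment variable helpers (sketch)
import os

def env_bool(name, default=False):
    value = os.getenv(name)
    if value is None:
        return default
    # Accept the usual truthy spellings; everything else is False
    return value.strip().lower() in ('1', 'true', 'yes', 'on')

def env_int(name, default):
    raw = os.getenv(name)
    if raw is None:
        return default
    try:
        return int(raw)
    except ValueError:
        # Fail fast with a clear message instead of a confusing crash later
        raise RuntimeError(f"{name} must be an integer, got {raw!r}")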
Performance and Monitoring Considerations
Environment variables have minimal performance impact, but there are some considerations for large-scale deployments:
| Aspect | Impact | Recommendation |
|---|---|---|
| Memory usage | ~1KB per 100 variables | Negligible for most applications |
| Startup time | Microseconds per variable | No practical impact |
| Security visibility | Visible in process lists | Use secrets for sensitive data |
| Change frequency | Requires container restart | Use config files for frequent changes |
Monitoring environment variable changes and their impact:
# Health check that validates required environment variables
# healthcheck.py
import os
import sys

required_vars = [
    'DATABASE_URL',
    'API_KEY',
    'REDIS_URL'
]

optional_vars = {
    'LOG_LEVEL': 'info',
    'TIMEOUT': '30',
    'MAX_CONNECTIONS': '100'
}

def check_environment():
    missing = []
    for var in required_vars:
        if not os.getenv(var):
            missing.append(var)

    if missing:
        print(f"Missing required environment variables: {', '.join(missing)}")
        sys.exit(1)

    print("Environment validation passed")
    for var in required_vars:
        print(f"  {var}=***")  # Don't log actual values
    for var, default in optional_vars.items():
        value = os.getenv(var, default)
        print(f"  {var}={value}")

if __name__ == '__main__':
    check_environment()
Add this to your container startup:
# Dockerfile
COPY healthcheck.py /app/
CMD ["sh", "-c", "python healthcheck.py && exec python app.py"]
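You can also wire the same script into Docker's native health checking, so orchestrators mark the container unhealthy when configuration is missing. A sketch — the interval and timeout values are arbitrary:
# Dockerfile (continued)
HEALTHCHECK --interval=30s --timeout=5s CMD python /app/healthcheck.py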
For production monitoring, consider logging environment variable checksums to detect configuration drift:
# config-monitor.py
import os
import hashlib
import json

def get_config_hash():
    config_vars = {k: v for k, v in os.environ.items()
                   if k.startswith(('APP_', 'DATABASE_', 'REDIS_', 'LOG_'))}
    # Sort for consistent hashing
    config_str = json.dumps(sorted(config_vars.items()))
    return hashlib.sha256(config_str.encode()).hexdigest()[:8]

print(f"Configuration hash: {get_config_hash()}")
Environment variables remain one of the most reliable ways to configure Docker containers. They’re simple, universally supported, and integrate well with orchestration platforms. The key is understanding their limitations – particularly around security and change management – and designing your configuration strategy accordingly. When combined with proper secret management and validation patterns, environment variables provide a solid foundation for container configuration that scales from development to production.
For more advanced deployment scenarios, consider pairing environment variables with service mesh configuration or external configuration services. Check out the official Docker documentation for additional details on environment variable handling, and explore twelve-factor app methodology for broader configuration management principles.
Whether you’re running containers locally or deploying to production infrastructure like VPS or dedicated servers, mastering environment variable patterns will make your containerized applications more maintainable and secure.
