
How to Use Logging in Python 3
Python logging is an essential skill for any developer building production applications, whether you’re deploying on a VPS or a dedicated server. Instead of relying on print statements that clutter your output and vanish into the void in production, proper logging gives you control over what gets recorded, where it goes, and how it’s formatted. This guide covers everything from basic setup to advanced configuration, common gotchas that’ll bite you if you’re not careful, and real-world patterns that actually work in production environments.
How Python’s Logging System Works
Python’s logging module follows a hierarchical system with four main components: loggers, handlers, formatters, and filters. Think of loggers as the entry points where your code sends messages, handlers as the destinations (console, files, network), formatters as the styling engine, and filters as the bouncers deciding what gets through.
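Filters get the least attention of the four, so here’s a minimal sketch of one in action. The HealthCheckFilter name and the /healthz endpoint are hypothetical, just for illustration; the filter drops noisy health-check messages before any handler sees them:
import logging

class HealthCheckFilter(logging.Filter):
    """Drop records whose message mentions the health-check endpoint."""
    def filter(self, record):
        # Returning False discards the record; True lets it through
        return '/healthz' not in record.getMessage()

api_logger = logging.getLogger('myapp.api')
api_logger.addFilter(HealthCheckFilter())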
The logging hierarchy works like a filesystem – if you create a logger named “myapp.database”, it inherits settings from “myapp” and ultimately from the root logger. This inheritance model is powerful but trips up newcomers who wonder why their carefully configured logger suddenly starts spitting out duplicate messages.
import logging
# Create a logger hierarchy
root_logger = logging.getLogger()
app_logger = logging.getLogger('myapp')
db_logger = logging.getLogger('myapp.database')
api_logger = logging.getLogger('myapp.api')
print(f"Root logger: {root_logger.name}")
print(f"App logger parent: {app_logger.parent.name}")
print(f"DB logger parent: {db_logger.parent.name}")
The five standard log levels (DEBUG, INFO, WARNING, ERROR, CRITICAL) have numeric values (10, 20, 30, 40, 50). When you set a logger to WARNING level, it filters out DEBUG and INFO messages. This filtering happens at both the logger and handler level, which can cause confusion when messages don’t appear where you expect them.
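Here’s a quick sketch of that double filter in action; a record must clear the logger’s level first, then each handler’s level:
import logging
import sys

logger = logging.getLogger('demo')
logger.setLevel(logging.DEBUG)       # logger passes DEBUG and above
logger.propagate = False             # keep the root logger out of this demo

handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.WARNING)    # handler only emits WARNING and above
logger.addHandler(handler)

logger.info('Clears the logger, blocked by the handler')  # nothing printed
logger.warning('Clears both levels')                       # printed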
Step-by-Step Implementation Guide
Let’s build a proper logging setup from scratch. The basic pattern: create a logger, create handlers with their own levels, attach formatters to those handlers, then add the handlers to the logger.
import logging
import sys
# Step 1: Create and configure the logger
logger = logging.getLogger('myapp')
logger.setLevel(logging.DEBUG)
# Step 2: Create handlers
console_handler = logging.StreamHandler(sys.stdout)
file_handler = logging.FileHandler('application.log')
# Set different levels for different handlers
console_handler.setLevel(logging.INFO)
file_handler.setLevel(logging.DEBUG)
# Step 3: Create formatters
console_format = logging.Formatter('%(name)s - %(levelname)s - %(message)s')
file_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(funcName)s:%(lineno)d - %(message)s')
# Step 4: Apply formatters to handlers
console_handler.setFormatter(console_format)
file_handler.setFormatter(file_format)
# Step 5: Add handlers to logger
logger.addHandler(console_handler)
logger.addHandler(file_handler)
# Test it out
logger.debug('This only goes to the file')
logger.info('This goes to both console and file')
logger.error('Something went wrong!')
For production applications, you’ll want to use configuration files instead of hardcoding everything. Python supports both dictionary-based and file-based configuration:
# logging_config.py
import logging.config

LOGGING_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'standard': {
            'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
        },
        'detailed': {
            'format': '%(asctime)s [%(levelname)s] %(name)s:%(funcName)s:%(lineno)d: %(message)s'
        }
    },
    'handlers': {
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'standard',
            'stream': 'ext://sys.stdout'
        },
        'file': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'formatter': 'detailed',
            'filename': 'app.log',
            'maxBytes': 10485760,  # 10MB
            'backupCount': 5
        }
    },
    'loggers': {
        'myapp': {
            'handlers': ['console', 'file'],
            'level': 'DEBUG',
            'propagate': False
        }
    }
}

logging.config.dictConfig(LOGGING_CONFIG)
logger = logging.getLogger('myapp')
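The file-based route goes through logging.config.fileConfig() and an INI-style file. A minimal sketch, assuming a file named logging.ini next to your entry point (the name is arbitrary):
# logging.ini (INI format consumed by fileConfig)
[loggers]
keys=root,myapp

[handlers]
keys=console

[formatters]
keys=standard

[logger_root]
level=WARNING
handlers=console

[logger_myapp]
level=DEBUG
handlers=console
qualname=myapp
propagate=0

[handler_console]
class=StreamHandler
level=INFO
formatter=standard
args=(sys.stdout,)

[formatter_standard]
format=%(asctime)s [%(levelname)s] %(name)s: %(message)s

# Load it once at startup
import logging.config
logging.config.fileConfig('logging.ini', disable_existing_loggers=False)
logger = logging.getLogger('myapp')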
Real-World Examples and Use Cases
Here’s a practical example of logging in a web application that handles different scenarios you’ll encounter in production:
import logging
import logging.handlers
import os
import traceback
from functools import wraps

class WebAppLogger:
    def __init__(self, app_name, log_dir='/var/log'):
        # Note: writing to /var/log usually requires elevated permissions;
        # point log_dir somewhere writable when testing locally
        self.app_name = app_name
        self.log_dir = log_dir
        self.setup_loggers()

    def setup_loggers(self):
        # Application logger
        self.app_logger = logging.getLogger(f'{self.app_name}.app')
        self.app_logger.setLevel(logging.DEBUG)
        # Access logger (like nginx access logs)
        self.access_logger = logging.getLogger(f'{self.app_name}.access')
        self.access_logger.setLevel(logging.INFO)
        # Error logger for exceptions
        self.error_logger = logging.getLogger(f'{self.app_name}.error')
        self.error_logger.setLevel(logging.ERROR)
        # Setup handlers
        self._setup_handlers()

    def _setup_handlers(self):
        # Rotating file handler for app logs
        app_handler = logging.handlers.TimedRotatingFileHandler(
            filename=os.path.join(self.log_dir, f'{self.app_name}.log'),
            when='midnight',
            interval=1,
            backupCount=30
        )
        app_formatter = logging.Formatter(
            '%(asctime)s [%(levelname)s] %(name)s:%(funcName)s:%(lineno)d - %(message)s'
        )
        app_handler.setFormatter(app_formatter)
        self.app_logger.addHandler(app_handler)

        # Access log handler
        access_handler = logging.handlers.TimedRotatingFileHandler(
            filename=os.path.join(self.log_dir, f'{self.app_name}_access.log'),
            when='midnight',
            interval=1,
            backupCount=30
        )
        access_formatter = logging.Formatter('%(asctime)s - %(message)s')
        access_handler.setFormatter(access_formatter)
        self.access_logger.addHandler(access_handler)

        # Error handler; an SMTPHandler could be added here for email alerts
        error_handler = logging.handlers.RotatingFileHandler(
            filename=os.path.join(self.log_dir, f'{self.app_name}_error.log'),
            maxBytes=50*1024*1024,  # 50MB
            backupCount=5
        )
        error_formatter = logging.Formatter(
            '%(asctime)s [%(levelname)s] %(name)s:%(funcName)s:%(lineno)d\n%(message)s'
        )
        error_handler.setFormatter(error_formatter)
        self.error_logger.addHandler(error_handler)

# Usage example
webapp_logger = WebAppLogger('mywebapp')

def log_exceptions(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            webapp_logger.error_logger.error(
                f"Exception in {func.__name__}: {str(e)}\n{traceback.format_exc()}"
            )
            raise
    return wrapper

@log_exceptions
def process_user_request(user_id, action):
    webapp_logger.access_logger.info(f"User {user_id} requested {action}")
    webapp_logger.app_logger.debug(f"Processing {action} for user {user_id}")
    # Simulate some processing
    if action == 'delete_everything':
        raise ValueError("That's not allowed!")
    webapp_logger.app_logger.info(f"Successfully processed {action} for user {user_id}")
    return "success"
For microservices and distributed systems, you’ll want structured logging with correlation IDs:
import logging
import json
import uuid
from contextvars import ContextVar

# Context variable to track request IDs across async operations
request_id_var: ContextVar[str] = ContextVar('request_id', default='')

class StructuredFormatter(logging.Formatter):
    # Attributes every LogRecord carries; anything else came in via `extra`
    # ('taskName' exists on Python 3.12+ and is excluded here as well)
    STANDARD_ATTRS = {'name', 'msg', 'args', 'levelname', 'levelno', 'pathname',
                      'filename', 'module', 'exc_info', 'exc_text', 'stack_info',
                      'lineno', 'funcName', 'created', 'msecs', 'relativeCreated',
                      'thread', 'threadName', 'processName', 'process', 'taskName'}

    def format(self, record):
        log_entry = {
            'timestamp': self.formatTime(record),
            'level': record.levelname,
            'logger': record.name,
            'message': record.getMessage(),
            'module': record.module,
            'function': record.funcName,
            'line': record.lineno,
            'request_id': request_id_var.get()
        }
        # Add exception info if present
        if record.exc_info:
            log_entry['exception'] = self.formatException(record.exc_info)
        # Add any extra fields passed via the `extra` keyword
        for key, value in record.__dict__.items():
            if key not in self.STANDARD_ATTRS:
                log_entry[key] = value
        return json.dumps(log_entry)

# Setup structured logger
structured_logger = logging.getLogger('structured')
handler = logging.StreamHandler()
handler.setFormatter(StructuredFormatter())
structured_logger.addHandler(handler)
structured_logger.setLevel(logging.INFO)

def set_request_id(req_id=None):
    if req_id is None:
        req_id = str(uuid.uuid4())
    request_id_var.set(req_id)
    return req_id

# Usage
def handle_api_request():
    request_id = set_request_id()
    structured_logger.info("API request started", extra={'endpoint': '/api/users', 'method': 'GET'})
    structured_logger.info("Database query executed", extra={'query_time_ms': 45})
    structured_logger.info("API request completed", extra={'response_code': 200})
Comparisons with Alternatives
| Feature | Python logging | loguru | structlog | print statements |
|---|---|---|---|---|
| Setup complexity | Medium | Low | Medium-High | None |
| Performance overhead | Low | Medium | Low-Medium | Very Low |
| Structured logging | Manual | Built-in | Excellent | No |
| Log rotation | Built-in | Built-in | Via handlers | No |
| Filtering/levels | Excellent | Good | Excellent | No |
| Production ready | Yes | Yes | Yes | No |
| Memory usage | Low | Medium | Low | Very Low |
The standard logging module wins for most production use cases because it’s battle-tested, has minimal dependencies, and integrates seamlessly with third-party libraries. Loguru is excellent for quick prototypes and applications where ease of use trumps everything else. Structlog shines in high-throughput systems where structured data and performance matter most.
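To make the setup-complexity row concrete, here’s a minimal loguru sketch (a third-party package, installed with pip install loguru); it shows the typical one-call setup, not an exhaustive comparison:
from loguru import logger

# One add() call covers the destination, rotation, and retention;
# no handler/formatter wiring required
logger.add("app.log", rotation="10 MB", retention=5, level="DEBUG")
logger.info("loguru needs almost no setup")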
Performance Considerations and Benchmarks
Logging performance matters more than most developers realize. Here’s a simple benchmark showing the impact of different logging patterns:
import logging
import os
import time

# Send log output to /dev/null so console I/O doesn't dominate the numbers
logging.basicConfig(level=logging.INFO, stream=open(os.devnull, 'w'))
logger = logging.getLogger('benchmark')

def benchmark_logging_patterns():
    iterations = 100000

    # Test 1: Logger with disabled level
    logger.setLevel(logging.ERROR)
    start = time.perf_counter()
    for i in range(iterations):
        logger.debug(f"Debug message {i}")  # Won't be processed
    disabled_time = time.perf_counter() - start

    # Test 2: String formatting in log call
    logger.setLevel(logging.DEBUG)
    start = time.perf_counter()
    for i in range(iterations):
        logger.debug(f"Debug message {i}")  # String formatted every time
    format_time = time.perf_counter() - start

    # Test 3: Lazy formatting
    start = time.perf_counter()
    for i in range(iterations):
        logger.debug("Debug message %s", i)  # Formatting deferred until the record is emitted
    lazy_time = time.perf_counter() - start

    # Test 4: Guard clauses
    start = time.perf_counter()
    for i in range(iterations):
        if logger.isEnabledFor(logging.DEBUG):
            logger.debug(f"Debug message {i}")
    guard_time = time.perf_counter() - start

    print(f"Disabled level: {disabled_time:.4f}s")
    print(f"String formatting: {format_time:.4f}s")
    print(f"Lazy formatting: {lazy_time:.4f}s")
    print(f"Guard clauses: {guard_time:.4f}s")

benchmark_logging_patterns()
Key performance insights:
- Disabled log levels have minimal overhead (microseconds per call)
- String formatting with f-strings happens regardless of log level
- Lazy %-style formatting (logger.debug("value: %s", value)) defers the work until a record actually passes the level check
- Guard clauses help with expensive computations but add overhead for simple messages
- File I/O dominates performance – use asynchronous handlers for high-throughput applications, as in the sketch below
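Here’s a minimal sketch of that asynchronous pattern using the standard library’s QueueHandler and QueueListener (available since Python 3.2): the handler just enqueues records, and a background thread does the slow file I/O.
import logging
import logging.handlers
import queue

log_queue = queue.Queue(-1)  # unbounded queue

# The logging call only pays the cost of a queue put
queue_handler = logging.handlers.QueueHandler(log_queue)
file_handler = logging.FileHandler('app.log')

# The listener runs a background thread that drains the queue into file_handler
listener = logging.handlers.QueueListener(log_queue, file_handler)
listener.start()

logger = logging.getLogger('myapp')
logger.addHandler(queue_handler)
logger.setLevel(logging.INFO)
logger.info("This call returns almost immediately")

listener.stop()  # flush and stop the background thread at shutdown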
Best Practices and Common Pitfalls
The most common mistake is creating multiple handlers accidentally, leading to duplicate log messages. This happens because of logger inheritance and forgetting to set propagate=False:
# WRONG - This creates duplicate messages
import logging
root_logger = logging.getLogger()
root_logger.addHandler(logging.StreamHandler())
app_logger = logging.getLogger('myapp')
app_logger.addHandler(logging.StreamHandler()) # Duplicate!
app_logger.error("This appears twice")
# RIGHT - Disable propagation or use hierarchy properly
app_logger = logging.getLogger('myapp')
app_logger.addHandler(logging.StreamHandler())
app_logger.propagate = False # Don't send to parent loggers
# OR configure only the root logger
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger('myapp') # Uses root logger's handlers
Another gotcha is forgetting that basicConfig() is a no-op if the root logger already has handlers. Libraries (or an earlier import) sometimes configure logging first, silently breaking your basicConfig call. On Python 3.8+ you can pass force=True to basicConfig to replace any existing handlers; the pattern below does the same thing manually:
# This pattern is more reliable
import logging

def setup_logging():
    root_logger = logging.getLogger()
    # Remove any existing handlers
    for handler in root_logger.handlers[:]:
        root_logger.removeHandler(handler)
    # Add your own handler
    handler = logging.StreamHandler()
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    handler.setFormatter(formatter)
    root_logger.addHandler(handler)
    root_logger.setLevel(logging.INFO)

setup_logging()
Security considerations matter too. Never log sensitive data, and be careful with user input that might contain format strings:
# SAFE - user input is passed as an argument, never treated as a format string
user_input = "Hello %s %s %s"
logger.info("User said: %s", user_input)
# ALSO SAFE - the f-string is interpolated before logging ever sees it
logger.info(f"User said: {user_input}")
# RISKY - if user input becomes the format string itself and arguments are
# ever added to the call, stray % specifiers will raise formatting errors
logger.info(user_input)
For deployment on production servers, always configure log rotation to prevent disk space issues:
import logging.handlers
import os

def setup_production_logging(app_name, log_dir='/var/log'):
    os.makedirs(log_dir, exist_ok=True)
    # Time-based rotation for application logs
    app_handler = logging.handlers.TimedRotatingFileHandler(
        filename=os.path.join(log_dir, f'{app_name}.log'),
        when='midnight',
        interval=1,
        backupCount=30,  # Keep 30 days
        encoding='utf-8'
    )
    # Size-based rotation for error logs
    error_handler = logging.handlers.RotatingFileHandler(
        filename=os.path.join(log_dir, f'{app_name}_error.log'),
        maxBytes=100*1024*1024,  # 100MB
        backupCount=10,
        encoding='utf-8'
    )
    # Apply a consistent format to both handlers
    for handler in [app_handler, error_handler]:
        handler.setFormatter(logging.Formatter(
            '%(asctime)s [%(process)d] [%(levelname)s] %(name)s: %(message)s'
        ))
    logging.getLogger(app_name).addHandler(app_handler)
    logging.getLogger(f'{app_name}.error').addHandler(error_handler)
Integration with system logging on Linux servers often requires connecting to syslog:
import logging
import logging.handlers

# Connect to the local syslog socket (Linux)
syslog_handler = logging.handlers.SysLogHandler(address='/dev/log')
syslog_formatter = logging.Formatter('myapp[%(process)d]: %(levelname)s - %(message)s')
syslog_handler.setFormatter(syslog_formatter)
logger = logging.getLogger('myapp')
logger.addHandler(syslog_handler)
logger.setLevel(logging.INFO)  # without this, the inherited WARNING level would filter info()
logger.info("Application started")  # Appears in /var/log/syslog or /var/log/messages, depending on the distro
Remember that proper logging is infrastructure code – set it up once, early in your application lifecycle, and then forget about it. Your future self debugging production issues at 3 AM will thank you for taking the time to implement comprehensive logging from the start.
