
How to Write Comments in Python 3 – Best Practices
Comments are the unsung heroes of maintainable Python code – they explain the ‘why’ behind your logic, document complex algorithms, and help future developers (including yourself) understand what the hell you were thinking at 3 AM six months ago. This guide covers everything from basic syntax to professional commenting strategies that’ll make your code readable, maintainable, and actually useful for teams working on VPS deployments and dedicated server environments where code clarity can mean the difference between smooth operations and midnight debugging sessions.
How Python Comments Work Under the Hood
Python’s comment system is elegantly simple. The interpreter treats anything after a hash symbol (#) as a comment until the end of that line, completely ignoring it during execution. This means zero performance overhead – comments don’t slow down your code execution at all.
Here’s the basic syntax breakdown:
# This is a single-line comment
print("Hello World")  # This is an inline comment

"""
This is a multi-line comment
using triple quotes. Technically it's a string literal,
but when not assigned to a variable, it acts like a comment.
"""

def example_function():
    """This is a docstring - a special type of comment for documenting functions."""
    pass
The Python tokenizer handles comments during the lexical analysis phase, stripping them out before the parser even sees the code. Multi-line comments using triple quotes are actually string literals that get parsed but aren’t assigned, so they’re effectively ignored at runtime.
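You can watch this happen with the standard library's tokenize module, which exposes the same lexical analysis phase the interpreter performs. In this quick sketch, the comment shows up as a COMMENT token that the parser never sees:

import io
import tokenize

source = 'x = 1  # set the initial counter\n'

# generate_tokens() walks the source exactly as the lexer does;
# the comment appears as a COMMENT token and is then discarded
for token in tokenize.generate_tokens(io.StringIO(source).readline):
    print(tokenize.tok_name[token.type], repr(token.string))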
Step-by-Step Implementation Guide
Basic Comment Types
Start with these fundamental comment patterns:
# 1. Explanatory comments - explain complex logic
def fibonacci(n):
    # Handle edge cases for the first two numbers
    if n <= 1:
        return n
    # Use iterative approach for better performance than recursion
    a, b = 0, 1
    for _ in range(2, n + 1):
        a, b = b, a + b
    return b

# 2. TODO comments - mark future improvements
def process_data(data):
    # TODO: Add input validation for malformed data
    # TODO: Implement caching for repeated queries
    return data.strip().lower()

# 3. Inline comments - clarify specific lines
user_score = (correct_answers / total_questions) * 100  # Convert to percentage

# 4. Section dividers - organize code blocks
# ============================================================================
# Database Connection Functions
# ============================================================================
def connect_db():
    pass
Professional Docstring Implementation
Docstrings follow specific conventions that tools like Sphinx can parse for documentation generation:
def calculate_server_load(cpu_usage, memory_usage, disk_io, network_io):
    """
    Calculate overall server load based on system metrics.

    This function combines multiple system metrics to provide a normalized
    load score useful for auto-scaling decisions in VPS environments.

    Args:
        cpu_usage (float): CPU utilization percentage (0-100)
        memory_usage (float): Memory utilization percentage (0-100)
        disk_io (float): Disk I/O operations per second
        network_io (float): Network throughput in MB/s

    Returns:
        float: Normalized load score (0-1), where 1 indicates maximum load

    Raises:
        ValueError: If any usage percentage is outside 0-100 range

    Example:
        >>> calculate_server_load(75.5, 60.2, 150.0, 25.3)
        0.568
    """
    if not (0 <= cpu_usage <= 100) or not (0 <= memory_usage <= 100):
        raise ValueError("Usage percentages must be between 0 and 100")

    # Weight factors based on typical server bottlenecks
    cpu_weight = 0.4
    memory_weight = 0.3
    disk_weight = 0.2
    network_weight = 0.1

    # Normalize disk and network metrics (example thresholds)
    disk_normalized = min(disk_io / 500.0, 1.0)  # 500 IOPS = max
    network_normalized = min(network_io / 100.0, 1.0)  # 100 MB/s = max

    load_score = (
        (cpu_usage / 100.0) * cpu_weight +
        (memory_usage / 100.0) * memory_weight +
        disk_normalized * disk_weight +
        network_normalized * network_weight
    )

    return round(load_score, 3)
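Once the function is defined, that docstring lives on its __doc__ attribute, which is exactly what help(), IDEs, and Sphinx read. As a bonus, the Example section can double as a regression test via the standard doctest module, so the documented output stays honest:

# The docstring is stored on the function object itself
print(calculate_server_load.__doc__)

# help() renders the same text interactively
help(calculate_server_load)

# doctest executes the Example section and reports any mismatch
import doctest
doctest.run_docstring_examples(calculate_server_load, globals(), verbose=True)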
Real-World Examples and Use Cases
Server Configuration Scripts
Here's how to comment server setup scripts effectively:
#!/usr/bin/env python3
"""
VPS Initial Setup Script
========================

Automates the initial configuration of a new VPS instance including:
- Security hardening
- Essential package installation
- User account setup
- Firewall configuration

Usage: python3 vps_setup.py --config config.yaml
"""

import subprocess
import logging
from pathlib import Path

# Configure logging for setup process tracking
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler('/var/log/vps_setup.log'),
        logging.StreamHandler()
    ]
)

def harden_ssh_config():
    """
    Modify SSH configuration for enhanced security.

    Changes applied:
    - Disable root login
    - Change default port
    - Enable key-based authentication only
    """
    ssh_config_path = Path('/etc/ssh/sshd_config')

    # Backup original configuration before modification
    backup_path = ssh_config_path.with_suffix('.bak')
    if not backup_path.exists():
        subprocess.run(['cp', str(ssh_config_path), str(backup_path)])
        logging.info(f"Created SSH config backup: {backup_path}")

    # Security settings to apply
    security_settings = [
        'PermitRootLogin no',
        'PasswordAuthentication no',
        'Port 2222',  # Non-standard port to reduce automated attacks
        'Protocol 2',
        'MaxAuthTries 3'
    ]

    # Use sed's c\ command to replace existing lines for each setting
    # (note: lines not already present in the file are not appended)
    for setting in security_settings:
        key = setting.split()[0]
        subprocess.run([
            'sed', '-i', f'/^{key}/c\\{setting}', str(ssh_config_path)
        ])
        logging.info(f"Applied SSH setting: {setting}")

    # Restart SSH service to apply changes
    subprocess.run(['systemctl', 'restart', 'ssh'])
    logging.info("SSH service restarted with new configuration")
Database Management Comments
import time  # Used for cache timestamps below

class DatabaseManager:
    """
    Handles database connections and query optimization for high-traffic applications.

    Designed for use in dedicated server environments where connection pooling
    and query caching are critical for performance.
    """

    def __init__(self, connection_string, pool_size=20):
        self.connection_string = connection_string
        self.pool_size = pool_size

        # Initialize connection pool
        # Note: Using a larger pool size for dedicated servers with more RAM
        self._connection_pool = self._create_pool()

        # Query cache to reduce database load
        # TTL set to 300 seconds for frequently accessed data
        self._query_cache = {}  # Format: {query_hash: (result, timestamp)}
        self.cache_ttl = 300

    def execute_query(self, query, params=None, use_cache=True):
        """
        Execute SQL query with optional caching.

        Performance considerations:
        - Uses prepared statements to prevent SQL injection
        - Implements query result caching for SELECT operations
        - Connection pooling reduces overhead for repeated queries

        Args:
            query (str): SQL query string
            params (tuple): Query parameters for prepared statement
            use_cache (bool): Whether to use query result caching

        Returns:
            list: Query results as list of dictionaries
        """
        # Generate cache key for SELECT queries only
        if use_cache and query.strip().upper().startswith('SELECT'):
            cache_key = hash(query + str(params or ''))

            # Check if cached result exists and is still valid
            if cache_key in self._query_cache:
                result, timestamp = self._query_cache[cache_key]
                if time.time() - timestamp < self.cache_ttl:
                    return result  # Return cached result
                else:
                    # Cache expired, remove stale entry
                    del self._query_cache[cache_key]

        # Execute query against database
        connection = self._get_connection()
        try:
            cursor = connection.cursor()
            cursor.execute(query, params or ())

            if query.strip().upper().startswith('SELECT'):
                result = cursor.fetchall()
                # Cache the result for future use
                if use_cache:
                    self._query_cache[cache_key] = (result, time.time())
                return result
            else:
                # For INSERT, UPDATE, DELETE operations
                connection.commit()
                return cursor.rowcount
        finally:
            self._return_connection(connection)
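To show why those docstrings and inline notes pay off, here's a hypothetical usage sketch; the connection string, table, and column names are placeholders, not part of the class above:

# Hypothetical usage - all identifiers below are illustrative
db = DatabaseManager('postgresql://app:secret@db-host/appdb', pool_size=10)

# First call hits the database; identical SELECTs within cache_ttl
# (300 seconds) are answered from the in-memory cache
rows = db.execute_query('SELECT id, name FROM users WHERE active = %s', (True,))

# Writes skip the cache entirely and return the affected row count
updated = db.execute_query(
    'UPDATE users SET last_seen = NOW() WHERE id = %s',
    (42,),
    use_cache=False
)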
Comment Types Comparison
| Comment Type | Syntax | Use Case | Performance Impact | Tool Support |
|---|---|---|---|---|
| Single-line | # Comment text | Quick explanations, TODOs | None | All editors |
| Inline | code  # Comment | Line-specific clarification | None | All editors |
| Multi-line string | """Comment block""" | Large comment blocks | Minimal (string creation) | Limited parsing |
| Docstrings | """Function docs""" | API documentation | Minimal (stored in __doc__) | Sphinx, IDEs, help() |
| Type hints | def func(x: int) -> str: | Parameter/return documentation | Minimal (stored in __annotations__) | mypy, IDEs |
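Type hints deserve the last row in that table because they move part of your documentation into syntax that mypy and IDEs can actually check. Here's a small sketch with a hypothetical function, showing hints and comments splitting the work:

def throttle_requests(rate_limit: int, window_seconds: float = 60.0) -> dict[str, float]:
    """Return the per-request delay that keeps traffic under rate_limit."""
    # The hints document the 'what' (types and defaults); the comment
    # documents the 'why': guard against a zero rate_limit from bad config
    delay = window_seconds / max(rate_limit, 1)
    return {'delay_seconds': delay, 'window_seconds': window_seconds}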
Best Practices and Common Pitfalls
Do's: Professional Comment Standards
- Explain the 'why', not the 'what': Your code should already make clear what it does; comments should explain the reasoning and context behind it
- Update comments with code changes: Outdated comments are worse than no comments
- Use consistent formatting: Establish team standards for comment style and stick to them
- Document complex algorithms: Include time/space complexity and algorithm choice reasoning
- Add context for magic numbers: Explain where constants come from and why they're used
import time

# GOOD: Explains reasoning and context
def retry_api_call(func, max_retries=3):
    """
    Retry API calls with exponential backoff.

    Uses exponential backoff to handle temporary network issues
    and API rate limiting common in VPS environments with shared resources.
    """
    for attempt in range(max_retries):
        try:
            return func()
        except APIException as e:  # APIException stands in for your client's error type
            if attempt == max_retries - 1:
                raise
            # Wait longer between retries to respect rate limits
            wait_time = 2 ** attempt  # 1s, 2s, 4s progression
            time.sleep(wait_time)

# BAD: States the obvious
def retry_api_call(func, max_retries=3):
    # Loop through the number of max retries
    for attempt in range(max_retries):
        try:
            # Call the function
            return func()
        except APIException as e:
            # If this is the last attempt
            if attempt == max_retries - 1:
                # Raise the exception
                raise
            # Sleep for some time
            time.sleep(2 ** attempt)
Don'ts: Avoid These Comment Anti-Patterns
- Don't comment bad code, rewrite it: If you need extensive comments to explain confusing code, refactor instead
- Avoid redundant comments: Don't repeat what the code clearly shows
- Don't use comments for version control: That's what Git is for
- Avoid misleading comments: Wrong comments are dangerous
- Don't comment out code for "later use": Delete it and use version control
# AVOID: These comment anti-patterns

# Redundant commenting
x = x + 1  # Increment x by 1

# Version control in comments (use Git instead)
def process_data(data):
    # Old version - removed 2023-03-15
    # return data.upper()
    # New version - added 2023-03-15
    # Updated version - modified 2023-03-20
    return data.strip().lower()

# Commented-out code (delete it instead)
def calculate_total(items):
    total = 0
    for item in items:
        total += item.price
    # Old calculation method - keep just in case
    # total = sum([item.price for item in items])
    # if discount:
    #     total *= 0.9
    return total
Advanced Comment Strategies
For large-scale applications running on dedicated servers, use structured commenting approaches:
"""
Server Monitoring Module
========================
This module provides comprehensive server monitoring capabilities
for dedicated server environments with the following features:
Architecture:
- Event-driven monitoring using asyncio
- Pluggable metric collectors
- Configurable alerting thresholds
- JSON-based configuration system
Performance Characteristics:
- Memory usage: ~50MB baseline + 10MB per monitored service
- CPU overhead: <2% on typical workloads
- Network overhead: ~1KB/minute per metric
Dependencies:
- psutil>=5.8.0 for system metrics
- aiohttp>=3.8.0 for HTTP monitoring
- redis>=4.0.0 for metric storage (optional)
"""
class MetricCollector:
"""
Base class for all metric collectors.
Metric collectors should inherit from this class and implement
the collect() method. The monitoring system will call collect()
at regular intervals based on the collector's configured frequency.
Thread Safety:
All collector implementations must be thread-safe as they
may be called concurrently from multiple monitoring threads.
Error Handling:
Collectors should catch and log their own exceptions. Uncaught
exceptions will disable the collector to prevent system instability.
"""
def __init__(self, name, frequency=60):
"""
Initialize metric collector.
Args:
name (str): Unique identifier for this collector
frequency (int): Collection interval in seconds
Recommended values:
- System metrics: 30-60 seconds
- Application metrics: 60-300 seconds
- Network metrics: 30 seconds
Note:
Lower frequencies increase system overhead but provide
better granularity for alerting and trend analysis.
"""
self.name = name
self.frequency = frequency
self.last_collection = 0
self.error_count = 0
# Disable collector after 5 consecutive errors to prevent
# log spam and resource waste
self.max_errors = 5
self.enabled = True
async def collect(self):
"""
Collect metrics and return as dictionary.
This method must be implemented by subclasses.
Returns:
dict: Metric data with timestamps
Format: {
'timestamp': unix_timestamp,
'metrics': {
'metric_name': numeric_value,
...
}
}
Raises:
NotImplementedError: If not overridden by subclass
"""
raise NotImplementedError("Subclasses must implement collect() method")
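To make that contract concrete, here's a minimal sketch of a subclass using psutil (already listed under the module's dependencies); the metric names and fields are illustrative, not a prescribed schema:

import time

import psutil  # listed under the module's Dependencies above

class CPUMetricCollector(MetricCollector):
    """Illustrative collector: reports CPU utilization and 1-minute load."""

    async def collect(self):
        # cpu_percent(interval=None) returns utilization since the last
        # call without blocking, so it is safe inside the event loop
        return {
            'timestamp': time.time(),
            'metrics': {
                'cpu_percent': psutil.cpu_percent(interval=None),
                'load_avg_1m': psutil.getloadavg()[0],
            },
        }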
Integration with Development Tools
Modern Python development relies on tools that parse and utilize comments effectively. Here's how to write comments that work well with popular tools:
IDE Integration
# PyCharm and VS Code recognize these special comment patterns:

# TODO: Implement user authentication
# FIXME: Handle edge case where user_id is None
# NOTE: This function has O(n²) complexity
# HACK: Temporary workaround for API rate limiting
# WARNING: Modifying this affects cache invalidation

def user_profile_data(user_id):
    """
    Retrieve user profile information with caching.

    :param user_id: Unique user identifier
    :type user_id: int
    :return: User profile dictionary
    :rtype: dict
    :raises ValueError: If user_id is invalid
    :raises ConnectionError: If database is unreachable

    .. note::
        This function implements aggressive caching. Cache TTL
        is 1 hour for profile data, 5 minutes for preference data.

    .. warning::
        Direct database queries bypass the cache. Use the provided
        methods for consistency.
    """
    pass
Documentation Generation
Sphinx and other documentation tools can parse specially formatted comments. Check the official Sphinx documentation for complete formatting guidelines:
def deploy_application(app_config, target_server, rollback_enabled=True):
    """
    Deploy application to target server with optional rollback capability.

    This function handles the complete deployment pipeline including:

    1. Pre-deployment validation
    2. Code package upload
    3. Service configuration
    4. Health check verification
    5. Rollback preparation (if enabled)

    Parameters
    ----------
    app_config : dict
        Application configuration dictionary containing:

        * ``name`` (str) -- Application name
        * ``version`` (str) -- Version to deploy
        * ``port`` (int) -- Service port number
        * ``health_check_url`` (str) -- URL for health verification
    target_server : str
        Server hostname or IP address. Must be accessible via SSH
        with configured key-based authentication.
    rollback_enabled : bool, optional
        Whether to prepare rollback capability (default: True).
        Rollback requires additional disk space (~2x application size).

    Returns
    -------
    dict
        Deployment result containing:

        * ``success`` (bool) -- Whether deployment succeeded
        * ``deployment_id`` (str) -- Unique deployment identifier
        * ``rollback_id`` (str) -- Rollback identifier (if enabled)
        * ``health_status`` (dict) -- Post-deployment health check results

    Raises
    ------
    DeploymentError
        If deployment fails at any stage
    ConnectionError
        If target server is unreachable
    ValidationError
        If app_config contains invalid parameters

    Examples
    --------
    Basic deployment:

    >>> config = {
    ...     'name': 'web-api',
    ...     'version': '1.2.3',
    ...     'port': 8080,
    ...     'health_check_url': '/health'
    ... }
    >>> result = deploy_application(config, '192.168.1.100')
    >>> print(result['success'])
    True

    Deployment without rollback capability:

    >>> result = deploy_application(config, '192.168.1.100', rollback_enabled=False)

    See Also
    --------
    rollback_deployment : Rollback to previous version
    verify_deployment : Verify deployment health
    list_deployments : List deployment history
    """
    pass
For teams working with microservices architectures on VPS or dedicated server environments, consistent commenting becomes even more critical. Well-commented code reduces onboarding time, prevents deployment errors, and makes troubleshooting significantly easier when services are distributed across multiple servers.
The key is finding the right balance – enough comments to make your code maintainable without cluttering it with obvious statements. Focus on documenting the complex logic, deployment considerations, and integration points that your future self (or teammates) will thank you for explaining.
