
Python unittest – Unit Test Example and Guide
Python’s unittest module is the built-in testing framework that every Python developer should master, providing a structured way to verify code functionality through automated tests. Understanding unit testing isn’t just about catching bugs – it’s about writing maintainable code, improving design decisions, and building confidence in your deployments. This guide will walk you through unittest fundamentals, practical implementation patterns, advanced testing techniques, and real-world scenarios you’ll encounter when testing Python applications on production servers.
How Python unittest Works
The unittest framework follows the xUnit pattern popularized by JUnit, organizing tests into test cases, test suites, and test runners. At its core, unittest uses classes that inherit from unittest.TestCase, with methods starting with "test_" automatically discovered and executed by the test runner.
Here’s the basic structure:
import unittest

class TestExample(unittest.TestCase):
    def setUp(self):
        # Run before each test method
        pass

    def tearDown(self):
        # Run after each test method
        pass

    def test_something(self):
        # Your test logic here
        self.assertEqual(actual, expected)

if __name__ == '__main__':
    unittest.main()
The framework provides several assertion methods for different types of comparisons:
- assertEqual(a, b) – Check if a equals b
- assertTrue(x) – Check if x is true
- assertRaises(exception, callable) – Check if callable raises exception
- assertIn(a, b) – Check if a is in b
- assertIsNone(x) – Check if x is None
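The assertion methods above can all be exercised in a minimal, self-contained test case against simple built-in values (the class and test names here are arbitrary):

```python
import unittest

class TestAssertions(unittest.TestCase):
    """Exercises the common assertion methods on built-in values."""

    def test_common_assertions(self):
        self.assertEqual(2 + 2, 4)                  # equality
        self.assertTrue("abc".startswith("a"))      # truthiness
        self.assertIn("b", "abc")                   # membership
        self.assertIsNone({}.get("missing"))        # None check
        with self.assertRaises(ZeroDivisionError):  # expected exception
            1 / 0
```

Save this in a file matching the test_*.py pattern and run it with python -m unittest to see all five assertions pass in a single test method.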
Step-by-Step Implementation Guide
Let’s build a comprehensive testing setup for a simple calculator class, covering various scenarios you’ll encounter in real applications.
First, create the main module (calculator.py):
class Calculator:
    def __init__(self):
        self.history = []

    def add(self, a, b):
        result = a + b
        self.history.append(f"{a} + {b} = {result}")
        return result

    def divide(self, a, b):
        if b == 0:
            raise ValueError("Cannot divide by zero")
        result = a / b
        self.history.append(f"{a} / {b} = {result}")
        return result

    def get_history(self):
        return self.history.copy()

    def clear_history(self):
        self.history.clear()
Now create the test file (test_calculator.py):
import unittest
from calculator import Calculator

class TestCalculator(unittest.TestCase):
    def setUp(self):
        """Set up test fixtures before each test method."""
        self.calc = Calculator()

    def tearDown(self):
        """Clean up after each test method."""
        self.calc = None

    def test_addition_positive_numbers(self):
        """Test addition with positive numbers."""
        result = self.calc.add(2, 3)
        self.assertEqual(result, 5)

    def test_addition_negative_numbers(self):
        """Test addition with negative numbers."""
        result = self.calc.add(-2, -3)
        self.assertEqual(result, -5)

    def test_division_normal_case(self):
        """Test normal division operation."""
        result = self.calc.divide(10, 2)
        self.assertEqual(result, 5.0)

    def test_division_by_zero_raises_exception(self):
        """Test that division by zero raises ValueError."""
        with self.assertRaises(ValueError) as context:
            self.calc.divide(10, 0)
        self.assertEqual(str(context.exception), "Cannot divide by zero")

    def test_history_tracking(self):
        """Test that operations are recorded in history."""
        self.calc.add(2, 3)
        self.calc.divide(10, 2)
        history = self.calc.get_history()
        self.assertEqual(len(history), 2)
        self.assertIn("2 + 3 = 5", history)
        self.assertIn("10 / 2 = 5.0", history)

    def test_clear_history(self):
        """Test history clearing functionality."""
        self.calc.add(1, 1)
        self.calc.clear_history()
        self.assertEqual(len(self.calc.get_history()), 0)

class TestCalculatorEdgeCases(unittest.TestCase):
    """Separate test class for edge cases."""

    def setUp(self):
        self.calc = Calculator()

    def test_float_precision(self):
        """Test floating point precision handling."""
        result = self.calc.add(0.1, 0.2)
        self.assertAlmostEqual(result, 0.3, places=7)

    def test_large_numbers(self):
        """Test operations with large numbers."""
        large_num = 10**100
        result = self.calc.add(large_num, 1)
        self.assertEqual(result, large_num + 1)

if __name__ == '__main__':
    unittest.main(verbosity=2)
Run the tests using different methods:
# Method 1: Direct execution
python test_calculator.py
# Method 2: Using unittest module
python -m unittest test_calculator.py
# Method 3: Discovery (finds all test files)
python -m unittest discover
# Method 4: Specific test case or method
python -m unittest test_calculator.TestCalculator.test_addition_positive_numbers
Advanced Testing Patterns
For more complex scenarios, unittest provides advanced features like test fixtures, mocking, and parameterized tests:
import unittest
from unittest.mock import patch, MagicMock
import tempfile
import os
from calculator import Calculator

class TestAdvancedPatterns(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        """Run once before all tests in this class."""
        cls.temp_dir = tempfile.mkdtemp()

    @classmethod
    def tearDownClass(cls):
        """Run once after all tests in this class."""
        os.rmdir(cls.temp_dir)

    def test_with_mock(self):
        """Test using mock objects."""
        # Note: patching 'requests.get' requires the requests package
        # to be importable in the test environment
        with patch('requests.get') as mock_get:
            mock_response = MagicMock()
            mock_response.status_code = 200
            mock_response.json.return_value = {'result': 'success'}
            mock_get.return_value = mock_response

            # Your code that uses requests.get
            # result = some_function_that_calls_requests()
            # self.assertEqual(result['result'], 'success')

    def test_skip_condition(self):
        """Test with conditional skipping."""
        if not hasattr(os, 'chmod'):
            self.skipTest("chmod not available on this platform")
        # Test code here

    @unittest.skip("Temporarily disabled")
    def test_disabled(self):
        """This test will be skipped."""
        pass

    def test_multiple_assertions(self):
        """Test with multiple assertions using subTest."""
        test_cases = [
            (2, 3, 5),
            (1, 1, 2),
            (-1, 1, 0),
        ]
        calc = Calculator()
        for a, b, expected in test_cases:
            with self.subTest(a=a, b=b, expected=expected):
                result = calc.add(a, b)
                self.assertEqual(result, expected)
Real-World Use Cases and Examples
Here are practical scenarios you’ll encounter when testing applications deployed on VPS or dedicated servers:
Testing Database Operations
import unittest
import sqlite3
import tempfile
import os

class TestDatabaseOperations(unittest.TestCase):
    def setUp(self):
        self.test_db = tempfile.NamedTemporaryFile(delete=False)
        self.test_db.close()
        self.conn = sqlite3.connect(self.test_db.name)
        self.cursor = self.conn.cursor()

        # Create test table
        self.cursor.execute('''
            CREATE TABLE users (
                id INTEGER PRIMARY KEY,
                username TEXT UNIQUE,
                email TEXT
            )
        ''')
        self.conn.commit()

    def tearDown(self):
        self.conn.close()
        os.unlink(self.test_db.name)

    def test_user_creation(self):
        """Test user creation in database."""
        self.cursor.execute(
            "INSERT INTO users (username, email) VALUES (?, ?)",
            ("testuser", "test@example.com")
        )
        self.conn.commit()

        self.cursor.execute("SELECT * FROM users WHERE username = ?", ("testuser",))
        user = self.cursor.fetchone()

        self.assertIsNotNone(user)
        self.assertEqual(user[1], "testuser")
        self.assertEqual(user[2], "test@example.com")

    def test_duplicate_username_handling(self):
        """Test handling of duplicate usernames."""
        self.cursor.execute(
            "INSERT INTO users (username, email) VALUES (?, ?)",
            ("duplicate", "first@example.com")
        )
        self.conn.commit()

        with self.assertRaises(sqlite3.IntegrityError):
            self.cursor.execute(
                "INSERT INTO users (username, email) VALUES (?, ?)",
                ("duplicate", "second@example.com")
            )
            self.conn.commit()
Testing File Operations
import unittest
import tempfile
import os
import json
import shutil

class TestFileOperations(unittest.TestCase):
    def setUp(self):
        self.temp_dir = tempfile.mkdtemp()
        self.config_file = os.path.join(self.temp_dir, "config.json")

    def tearDown(self):
        shutil.rmtree(self.temp_dir)

    def test_config_file_creation(self):
        """Test creating and reading configuration files."""
        config_data = {
            "database_url": "sqlite:///test.db",
            "debug": True,
            "port": 8080
        }

        # Write config file
        with open(self.config_file, 'w') as f:
            json.dump(config_data, f)

        # Verify file exists and has correct content
        self.assertTrue(os.path.exists(self.config_file))

        with open(self.config_file, 'r') as f:
            loaded_config = json.load(f)

        self.assertEqual(loaded_config, config_data)
        self.assertEqual(loaded_config["port"], 8080)
    def test_file_permissions(self):
        """Test file permission handling."""
        test_file = os.path.join(self.temp_dir, "test.txt")
        with open(test_file, 'w') as f:
            f.write("test content")

        # Test readable
        self.assertTrue(os.access(test_file, os.R_OK))

        # Change permissions and test
        # Note: this check does not hold when running as root,
        # and chmod has limited effect on Windows
        os.chmod(test_file, 0o000)
        self.assertFalse(os.access(test_file, os.R_OK))
Testing Web Applications
For web applications, you’ll often need to test HTTP endpoints and API responses:
import unittest
from unittest.mock import patch, MagicMock
import json

class TestAPIEndpoints(unittest.TestCase):
    def setUp(self):
        # Assuming you have a Flask app
        # self.app = create_app(testing=True)
        # self.client = self.app.test_client()
        pass

    @patch('requests.post')
    def test_external_api_call(self, mock_post):
        """Test making calls to external APIs."""
        # Mock the external API response
        mock_response = MagicMock()
        mock_response.status_code = 200
        mock_response.json.return_value = {
            'status': 'success',
            'data': {'id': 123, 'name': 'Test Item'}
        }
        mock_post.return_value = mock_response

        # Your function that makes the API call
        # result = make_external_api_call({'name': 'Test Item'})

        # Verify the call was made correctly
        # mock_post.assert_called_once()
        # self.assertEqual(result['status'], 'success')

    def test_input_validation(self):
        """Test input validation for API endpoints."""
        invalid_inputs = [
            "",  # Empty string
            None,  # None value
            "a" * 1000,  # Too long
            {"invalid": "structure"},  # Wrong structure
        ]

        for invalid_input in invalid_inputs:
            with self.subTest(input=invalid_input):
                # Test that validation properly rejects invalid input
                # result = validate_input(invalid_input)
                # self.assertFalse(result.is_valid)
                pass
Framework Comparisons
Here’s how unittest compares to other Python testing frameworks:
| Feature | unittest | pytest | nose2 |
|---|---|---|---|
| Built-in | Yes | No (pip install) | No (pip install) |
| Test Discovery | Basic | Advanced | Good |
| Fixtures | setUp/tearDown | Flexible decorators | setUp/tearDown |
| Parameterized Tests | subTest only | Built-in | Plugins |
| Assertion Style | self.assert* | Plain assert | self.assert* |
| Plugin Ecosystem | Limited | Extensive | Moderate |
| Learning Curve | Moderate | Easy | Moderate |
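The assertion-style row is the difference you notice most in daily use. A minimal sketch contrasting the two styles on the same check (the pytest version is shown as a comment for comparison only):

```python
import unittest

class TestAssertionStyle(unittest.TestCase):
    def test_sorting(self):
        # unittest style: method-based assertions with built-in failure messages
        self.assertEqual(sorted([3, 1, 2]), [1, 2, 3])

# The equivalent pytest style would be a plain function with a bare assert:
# def test_sorting():
#     assert sorted([3, 1, 2]) == [1, 2, 3]
```

Both report failures with useful diffs; unittest does it through its assert* methods, while pytest rewrites plain assert statements at collection time.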
Best Practices and Common Pitfalls
Follow these practices to write maintainable and reliable tests:
Best Practices
- Test Independence: Each test should be able to run independently without relying on other tests
- Clear Test Names: Use descriptive names that explain what the test verifies
- Single Responsibility: Each test should verify one specific behavior
- Arrange-Act-Assert Pattern: Structure tests with clear setup, execution, and verification phases
- Use setUp and tearDown: Properly initialize and clean up test resources
# Good test structure
def test_user_login_with_valid_credentials_returns_success(self):
    # Arrange
    username = "validuser"
    password = "validpass"
    user = User(username, password)

    # Act
    result = user.login(username, password)

    # Assert
    self.assertTrue(result.success)
    self.assertEqual(result.message, "Login successful")
Common Pitfalls to Avoid
- Testing Implementation Details: Focus on behavior, not internal implementation
- Brittle Tests: Avoid tests that break when unrelated code changes
- Slow Tests: Keep unit tests fast by avoiding I/O operations when possible
- Poor Error Messages: Use custom messages in assertions for better debugging
# Bad: Testing implementation details
def test_internal_cache_structure(self):
    cache = Cache()
    cache.set("key", "value")
    self.assertEqual(cache._internal_dict["key"], "value")  # Don't test private members

# Good: Testing behavior
def test_cache_stores_and_retrieves_values(self):
    cache = Cache()
    cache.set("key", "value")
    retrieved_value = cache.get("key")
    self.assertEqual(retrieved_value, "value", "Cache should return the stored value")
Performance and Test Organization
For larger projects, organize tests efficiently and monitor performance:
# test_suite.py - Custom test suite
import unittest
from test_calculator import TestCalculator
from test_database import TestDatabaseOperations
from test_api import TestAPIEndpoints

def create_test_suite():
    """Create a custom test suite with specific tests."""
    suite = unittest.TestSuite()

    # Add specific test methods
    suite.addTest(TestCalculator('test_addition_positive_numbers'))
    suite.addTest(TestDatabaseOperations('test_user_creation'))

    # Add entire test classes
    suite.addTests(unittest.TestLoader().loadTestsFromTestCase(TestAPIEndpoints))

    return suite

def run_performance_tests():
    """Run tests with timing information."""
    import time

    class TimedTestResult(unittest.TextTestResult):
        def startTest(self, test):
            self.start_time = time.time()
            super().startTest(test)

        def stopTest(self, test):
            elapsed = time.time() - self.start_time
            super().stopTest(test)
            print(f"{test}: {elapsed:.3f}s")

    runner = unittest.TextTestRunner(resultclass=TimedTestResult, verbosity=2)
    suite = create_test_suite()
    runner.run(suite)

if __name__ == '__main__':
    run_performance_tests()
Integration with CI/CD and Server Deployment
When deploying to production servers, integrate unittest with your deployment pipeline:
# test_runner.py - Script for CI/CD integration
import unittest
import sys
import os
import json
from io import StringIO

class JSONTestResult(unittest.TextTestResult):
    """Custom test result class that outputs JSON for CI/CD systems."""

    def __init__(self, stream, descriptions, verbosity):
        super().__init__(stream, descriptions, verbosity)
        self.test_results = []

    def addSuccess(self, test):
        super().addSuccess(test)
        self.test_results.append({
            'test': str(test),
            'status': 'PASS',
            'message': None
        })

    def addError(self, test, err):
        super().addError(test, err)
        self.test_results.append({
            'test': str(test),
            'status': 'ERROR',
            'message': str(err[1])
        })

    def addFailure(self, test, err):
        super().addFailure(test, err)
        self.test_results.append({
            'test': str(test),
            'status': 'FAIL',
            'message': str(err[1])
        })

    def get_results_json(self):
        passed = self.testsRun - len(self.failures) - len(self.errors)
        return json.dumps({
            'total_tests': self.testsRun,
            'failures': len(self.failures),
            'errors': len(self.errors),
            # Guard against division by zero when no tests were collected
            'success_rate': passed / self.testsRun * 100 if self.testsRun else 0.0,
            'results': self.test_results
        }, indent=2)
def run_tests_for_ci():
    """Run tests and output results in CI-friendly format."""
    loader = unittest.TestLoader()
    suite = loader.discover('tests', pattern='test_*.py')

    stream = StringIO()
    runner = unittest.TextTestRunner(
        stream=stream,
        resultclass=JSONTestResult,
        verbosity=2
    )

    result = runner.run(suite)

    # Output JSON results for CI system
    print(result.get_results_json())

    # Exit with appropriate code
    sys.exit(0 if result.wasSuccessful() else 1)

if __name__ == '__main__':
    if '--ci' in sys.argv:
        run_tests_for_ci()
    else:
        unittest.main()
Troubleshooting Common Issues
Here are solutions to frequent unittest problems:
Import Path Issues
# If tests can't find your modules, add this to test files:
import sys
import os
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..'))

# Or use relative imports (these only work when the tests directory is a
# package and tests are run via "python -m unittest", not executed directly):
from ..mymodule import MyClass
Test Discovery Problems
# Ensure your test directory structure looks like this:
project/
├── __init__.py
├── mymodule.py
└── tests/
├── __init__.py
├── test_mymodule.py
└── test_integration.py
# Run discovery from project root:
python -m unittest discover -s tests -p "test_*.py"
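Beyond the directory layout, unittest's load_tests protocol lets a module take programmatic control over what discovery collects from it. A minimal sketch (the class names here are arbitrary): when this module defines load_tests, TestLoader.loadTestsFromModule() and discovery call the hook instead of collecting every TestCase automatically.

```python
import unittest

class TestIncluded(unittest.TestCase):
    def test_included(self):
        self.assertTrue(True)

class TestExcluded(unittest.TestCase):
    def test_excluded(self):
        self.assertTrue(True)

def load_tests(loader, standard_tests, pattern):
    # Discovery calls this hook with the loader, the tests it collected
    # by default, and the discovery pattern; here we hand back only
    # TestIncluded, so TestExcluded is never run by discovery.
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(TestIncluded))
    return suite
```

This is handy for excluding slow integration tests from the default run or for building suites dynamically without a separate test_suite.py.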
Resource Cleanup Issues
import unittest
import tempfile
import shutil
import os

class TestWithProperCleanup(unittest.TestCase):
    def setUp(self):
        self.temp_dir = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, self.temp_dir)  # Ensures cleanup even if test fails

    def test_file_operations(self):
        # Even if this test raises an exception,
        # temp_dir will be cleaned up
        test_file = os.path.join(self.temp_dir, "test.txt")
        with open(test_file, 'w') as f:
            f.write("test data")
        # ... rest of test
Python’s unittest module provides a solid foundation for testing applications, from simple scripts to complex web services deployed on production servers. While frameworks like pytest offer more modern features, unittest’s inclusion in the standard library and comprehensive feature set make it an excellent choice for most projects. The key to successful unit testing lies in writing focused, independent tests that verify behavior rather than implementation details, combined with proper organization and integration into your development workflow.
For additional resources, check the official unittest documentation and consider exploring unittest.mock for advanced mocking capabilities.
