Python pytest Plugins: Extending pytest
Key Insights
- pytest’s hook-based architecture lets you intercept and modify nearly every aspect of test collection, execution, and reporting without touching pytest’s source code
- Local plugins via conftest.py provide immediate extensibility for project-specific needs, while packaged plugins enable reuse across projects and teams
- The pytester fixture is essential for testing plugins—it creates isolated pytest runs that verify your hooks and fixtures behave correctly
Introduction to pytest’s Plugin Architecture
pytest’s power comes from its extensibility. Nearly every aspect of how pytest discovers, collects, runs, and reports tests can be modified through plugins. This isn’t an afterthought—it’s the foundation of pytest’s design.
The plugin system uses a hook-based architecture. pytest defines “hook specifications” (function signatures that describe extension points), and plugins provide “hook implementations” (functions that execute at those extension points). When pytest reaches a hook point, it calls all registered implementations in a defined order.
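The pattern can be sketched with pluggy, the hook framework pytest itself is built on (a standalone illustration with invented names; it assumes pluggy is available, which it is wherever pytest is installed):

```python
import pluggy

# Markers tie specs and implementations to one project namespace.
hookspec = pluggy.HookspecMarker("myapp")
hookimpl = pluggy.HookimplMarker("myapp")


class MyAppSpecs:
    """Hook specifications: signatures that describe extension points."""

    @hookspec
    def process_item(self, item):
        """Called for every item; implementations may transform it."""


class UpperCasePlugin:
    """A plugin providing a hook implementation, matched by name."""

    @hookimpl
    def process_item(self, item):
        return item.upper()


pm = pluggy.PluginManager("myapp")
pm.add_hookspecs(MyAppSpecs)
pm.register(UpperCasePlugin())

# Calling the hook runs every registered implementation and collects
# their non-None results into a list.
results = pm.hook.process_item(item="hello")
print(results)  # ['HELLO']
```

This is exactly the relationship between pytest's hookspecs and the hook functions you write in a plugin.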
Plugin discovery happens automatically through several mechanisms:
- Built-in plugins: Core pytest functionality implemented as internal plugins
- conftest.py files: Local plugins discovered in test directories
- Installed packages: Any package that exposes a pytest11 entry point
- Command-line plugins: Loaded explicitly via the -p flag
This layered approach means you can start simple with conftest.py and graduate to packaged plugins when needed.
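The -p mechanism works both on the command line (pytest -p no:cacheprovider disables a built-in plugin by name) and in configuration. A sketch, assuming a hypothetical module importable as myproject.plugin:

```ini
# pytest.ini (module path "myproject.plugin" is an assumed example)
[pytest]
addopts = -p myproject.plugin
```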
Understanding pytest Hooks
pytest exposes dozens of hooks covering the entire test lifecycle. Here are the ones you’ll use most often:
- pytest_configure(config): Called after command-line options are parsed, before test collection
- pytest_collection_modifyitems(config, items): Called after collection; lets you filter or reorder tests
- pytest_runtest_setup/call/teardown(item): Called around each test phase
- pytest_runtest_makereport(item, call): Called to create test reports
- pytest_terminal_summary(terminalreporter): Adds custom output at the end of a test run
Hook specifications live in pytest's internal _pytest/hookspec.py module and are documented in the pytest reference. Your implementations are regular functions decorated with @pytest.hookimpl (the decorator is optional—hooks are matched by name—but recommended for clarity).
Here’s a practical example—a hook that logs test execution times:
```python
# conftest.py
import time

import pytest

test_times = {}


@pytest.hookimpl(tryfirst=True)
def pytest_runtest_setup(item):
    """Record test start time."""
    test_times[item.nodeid] = {"start": time.time()}


@pytest.hookimpl(trylast=True)
def pytest_runtest_teardown(item):
    """Calculate and store test duration."""
    if item.nodeid in test_times:
        test_times[item.nodeid]["duration"] = (
            time.time() - test_times[item.nodeid]["start"]
        )


def pytest_terminal_summary(terminalreporter):
    """Print slowest tests at the end of the run."""
    terminalreporter.write_sep("=", "Slowest Tests")
    sorted_tests = sorted(
        test_times.items(),
        key=lambda x: x[1].get("duration", 0),
        reverse=True
    )
    for nodeid, data in sorted_tests[:5]:
        duration = data.get("duration", 0)
        terminalreporter.write_line(f"{duration:.3f}s {nodeid}")
```
The tryfirst and trylast arguments to @pytest.hookimpl control hook execution order. Use tryfirst when you need to run before other plugins (like capturing state before modifications) and trylast when you need the final say.
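The ordering semantics can be observed directly with pluggy, outside of pytest (a sketch with made-up plugin names; tryfirst implementations run before regular ones, trylast implementations run after them):

```python
import pluggy

hookspec = pluggy.HookspecMarker("demo")
hookimpl = pluggy.HookimplMarker("demo")


class Spec:
    @hookspec
    def greet(self):
        """Each implementation returns a label."""


class Regular:
    @hookimpl
    def greet(self):
        return "regular"


class First:
    @hookimpl(tryfirst=True)
    def greet(self):
        return "first"


class Last:
    @hookimpl(trylast=True)
    def greet(self):
        return "last"


pm = pluggy.PluginManager("demo")
pm.add_hookspecs(Spec)
for plugin in (Regular(), First(), Last()):
    pm.register(plugin)

# Results are collected in call order: tryfirst, then regular, then trylast.
order = pm.hook.greet()
print(order)
```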
Creating Your First Local Plugin (conftest.py)
conftest.py is pytest’s mechanism for local plugins. Any hooks or fixtures defined here apply to tests in the same directory and subdirectories. No installation required—pytest discovers these files automatically.
This makes conftest.py perfect for project-specific customizations. Here’s a practical example that skips tests based on environment variables and provides environment-aware fixtures:
```python
# conftest.py
import os

import pytest


def pytest_configure(config):
    """Register custom markers."""
    config.addinivalue_line(
        "markers",
        "requires_env(name): skip test if environment variable is not set"
    )
    config.addinivalue_line(
        "markers",
        "production_only: skip test unless ENVIRONMENT=production"
    )


def pytest_collection_modifyitems(config, items):
    """Skip tests based on environment requirements."""
    for item in items:
        # Handle @pytest.mark.requires_env("VAR_NAME")
        for marker in item.iter_markers(name="requires_env"):
            env_var = marker.args[0]
            if not os.environ.get(env_var):
                item.add_marker(
                    pytest.mark.skip(
                        reason=f"Requires {env_var} environment variable"
                    )
                )
        # Handle @pytest.mark.production_only
        if item.get_closest_marker("production_only"):
            if os.environ.get("ENVIRONMENT") != "production":
                item.add_marker(
                    pytest.mark.skip(reason="Only runs in production environment")
                )


@pytest.fixture
def api_client():
    """Provide an API client configured for the current environment."""
    base_url = os.environ.get("API_URL", "http://localhost:8000")
    # In a real implementation, this would return an actual client
    return {"base_url": base_url, "environment": os.environ.get("ENVIRONMENT", "local")}


@pytest.fixture
def db_connection(request):
    """Provide a database connection, with automatic cleanup."""
    env = os.environ.get("ENVIRONMENT", "test")
    connection_string = os.environ.get(
        "DATABASE_URL",
        f"postgresql://localhost/myapp_{env}"
    )
    # Simulated connection setup
    connection = {"url": connection_string, "connected": True}
    yield connection
    # Cleanup
    connection["connected"] = False
```
Tests can now use these markers and fixtures:
```python
# test_api.py
import pytest


@pytest.mark.requires_env("API_KEY")
def test_authenticated_endpoint(api_client):
    assert api_client["base_url"] is not None


@pytest.mark.production_only
def test_production_metrics(db_connection):
    assert db_connection["connected"]
```
Building a Packaged Plugin
When your plugin needs to be shared across projects, package it. The convention is to name packages pytest-* (e.g., pytest-timeout, pytest-cov).
Here’s a complete plugin structure:
```
pytest-slowmarker/
├── pyproject.toml
├── README.md
├── src/
│   └── pytest_slowmarker/
│       ├── __init__.py
│       └── plugin.py
└── tests/
    ├── conftest.py
    └── test_plugin.py
```
The pyproject.toml registers your plugin with pytest:
```toml
[build-system]
requires = ["setuptools>=61.0", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "pytest-slowmarker"
version = "0.1.0"
description = "A pytest plugin for marking and filtering slow tests"
readme = "README.md"
requires-python = ">=3.8"
dependencies = ["pytest>=7.0.0"]

[project.entry-points.pytest11]
slowmarker = "pytest_slowmarker.plugin"

[project.optional-dependencies]
dev = ["pytest>=7.0.0"]

[tool.setuptools.packages.find]
where = ["src"]
```
The critical part is [project.entry-points.pytest11]. This tells pytest to load your plugin module. The key (slowmarker) is an identifier; the value is the import path to your plugin module.
Adding Custom Command-Line Options and Markers
A well-designed plugin often needs both CLI options and markers. Here’s the complete plugin implementation:
```python
# src/pytest_slowmarker/plugin.py
import pytest


def pytest_addoption(parser):
    """Add command-line options."""
    group = parser.getgroup("slowmarker")
    group.addoption(
        "--slow",
        action="store_true",
        default=False,
        help="Run tests marked as slow"
    )
    group.addoption(
        "--slow-threshold",
        action="store",
        type=float,
        default=1.0,
        help="Threshold in seconds for auto-marking slow tests (default: 1.0)"
    )


def pytest_configure(config):
    """Register the slow marker."""
    config.addinivalue_line(
        "markers",
        "slow: mark test as slow (deselected by default, use --slow to run)"
    )


def pytest_collection_modifyitems(config, items):
    """Skip slow tests unless --slow is passed."""
    if config.getoption("--slow"):
        # --slow given: don't skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="Slow test (use --slow to run)")
    for item in items:
        if item.get_closest_marker("slow"):
            item.add_marker(skip_slow)


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    """Report tests that exceed the slow threshold."""
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.passed:
        threshold = item.config.getoption("--slow-threshold")
        if report.duration >= threshold:
            # Attach a warning to the report object
            if not hasattr(report, "warnings"):
                report.warnings = []
            report.warnings.append(
                f"Test exceeded slow threshold ({report.duration:.2f}s >= {threshold}s). "
                f"Consider adding @pytest.mark.slow"
            )


def pytest_terminal_summary(terminalreporter, config):
    """Summarize slow test warnings."""
    slow_tests = []
    for report in terminalreporter.stats.get("passed", []):
        if hasattr(report, "warnings"):
            slow_tests.extend(
                (report.nodeid, w) for w in report.warnings
            )
    if slow_tests:
        terminalreporter.write_sep("=", "Slow Test Warnings")
        for nodeid, warning in slow_tests:
            terminalreporter.write_line(f"  {nodeid}")
            terminalreporter.write_line(f"    {warning}")
```
The hookwrapper=True parameter is crucial here. It lets your hook wrap around other implementations, giving you access to both the input and the result.
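The wrapper protocol itself is simple: a hookwrapper is a generator that yields exactly once; the yield hands control to the inner implementations and produces an outcome object whose result you can inspect or override. A standalone sketch using pluggy with invented hook names:

```python
import pluggy

hookspec = pluggy.HookspecMarker("demo")
hookimpl = pluggy.HookimplMarker("demo")


class Spec:
    @hookspec(firstresult=True)
    def compute(self, x):
        """Return a value derived from x."""


class Inner:
    @hookimpl
    def compute(self, x):
        return x + 1


class Doubler:
    @hookimpl(hookwrapper=True)
    def compute(self, x):
        # Code before the yield runs before inner implementations.
        outcome = yield
        # After the yield, the inner result is available and can be replaced.
        outcome.force_result(outcome.get_result() * 2)


pm = pluggy.PluginManager("demo")
pm.add_hookspecs(Spec)
pm.register(Inner())
pm.register(Doubler())

result = pm.hook.compute(x=3)  # inner returns 4, wrapper doubles it
print(result)
```

This mirrors the outcome.get_result() call in pytest_runtest_makereport above.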
Testing Your Plugin
pytest provides the pytester fixture specifically for testing plugins. It creates an isolated temporary directory and runs pytest there (in-process by default; use runpytest_subprocess for a true subprocess), capturing results for assertions.
```python
# tests/conftest.py
pytest_plugins = ["pytester"]
```
```python
# tests/test_plugin.py
def test_slow_marker_skips_by_default(pytester):
    """Verify slow tests are skipped without --slow flag."""
    pytester.makepyfile("""
        import pytest
        import time

        @pytest.mark.slow
        def test_slow():
            time.sleep(0.1)
            assert True

        def test_fast():
            assert True
    """)
    result = pytester.runpytest("-v")
    result.assert_outcomes(passed=1, skipped=1)
    assert "Slow test" in result.stdout.str()


def test_slow_flag_runs_slow_tests(pytester):
    """Verify --slow flag runs slow tests."""
    pytester.makepyfile("""
        import pytest

        @pytest.mark.slow
        def test_slow():
            assert True
    """)
    result = pytester.runpytest("--slow", "-v")
    result.assert_outcomes(passed=1)


def test_threshold_warning(pytester):
    """Verify tests exceeding threshold generate warnings."""
    pytester.makepyfile("""
        import time

        def test_exceeds_threshold():
            time.sleep(0.2)
            assert True
    """)
    result = pytester.runpytest("--slow-threshold=0.1", "-v")
    result.assert_outcomes(passed=1)
    assert "slow threshold" in result.stdout.str().lower()


def test_custom_threshold(pytester):
    """Verify custom threshold is respected."""
    pytester.makepyfile("""
        import time

        def test_under_custom_threshold():
            time.sleep(0.1)
            assert True
    """)
    result = pytester.runpytest("--slow-threshold=1.0", "-v")
    result.assert_outcomes(passed=1)
    # Should not trigger warning with high threshold
    assert "slow threshold" not in result.stdout.str().lower()
```
The pytester fixture handles test isolation and temporary directories for you. Use makepyfile to create test files, runpytest to execute tests, and assert_outcomes to verify results.
Publishing and Distribution
Before publishing to PyPI, ensure your plugin meets community standards:
- Documentation: Include a clear README with installation instructions, usage examples, and configuration options
- Testing: Aim for high test coverage using pytester
- Compatibility: Test against multiple pytest and Python versions using tox or nox
- Licensing: Include a LICENSE file (MIT is common for pytest plugins)
Build and publish:
```shell
# Install build tools
pip install build twine

# Build distribution packages
python -m build

# Upload to PyPI (you'll need an account and API token)
twine upload dist/*
```
For visibility, consider adding your plugin to the pytest plugin list and tagging your repository with pytest-plugin.
pytest’s plugin ecosystem is one of its greatest strengths. Whether you’re building internal tools or contributing to the community, understanding hooks and plugin architecture opens up powerful testing capabilities. Start with conftest.py for quick wins, then package and share when your plugin matures.