Python pytest Markers: Test Selection and Skipping
Markers are pytest's mechanism for attaching metadata to your tests. Think of them as labels you can apply to test functions or classes, then use to control which tests run and how they behave.
Key Insights
- pytest markers let you categorize tests and control execution flow, enabling you to skip platform-specific tests, mark known failures, and run targeted test subsets without modifying test code.
- Custom markers combined with the `-m` flag create a powerful filtering system—run only your fast unit tests during development, then execute the full integration suite in CI.
- Always register custom markers in your configuration file and use `--strict-markers` to catch typos that would otherwise silently create new, unused markers.
Introduction to pytest Markers
pytest ships with several built-in markers for common scenarios: skipping tests, marking expected failures, and parameterizing test data. You can also define custom markers to categorize tests by speed, type, or any other dimension that makes sense for your project.
The real power comes from combining markers with pytest’s selection system. Instead of commenting out tests or restructuring your test files, you tag tests appropriately and let the command line decide what runs.
Built-in Skip and Skipif Markers
The @pytest.mark.skip marker unconditionally skips a test. Use it when a test is temporarily broken or depends on functionality you haven’t implemented yet.
```python
import pytest

@pytest.mark.skip(reason="Waiting for API v2 deployment")
def test_new_api_endpoint():
    response = client.get("/api/v2/users")
    assert response.status_code == 200
```
The reason parameter is optional but strongly recommended. When tests get skipped, you want future-you (or your teammates) to understand why.
@pytest.mark.skipif adds conditional logic. The test runs only when the condition evaluates to False:
```python
import pytest
import platform
import shutil
import sys

@pytest.mark.skipif(
    sys.version_info < (3, 10),
    reason="Requires Python 3.10+ for match statement"
)
def test_pattern_matching():
    result = parse_command("create user john")
    match result:
        case {"action": "create", "type": "user", "name": name}:
            assert name == "john"
        case _:
            pytest.fail("Pattern did not match")

@pytest.mark.skipif(
    platform.system() == "Windows",
    reason="Unix-specific file permissions test"
)
def test_file_permissions():
    import os
    import stat

    os.chmod("/tmp/testfile", stat.S_IRUSR)
    assert not os.access("/tmp/testfile", os.W_OK)

@pytest.mark.skipif(
    not shutil.which("docker"),
    reason="Docker CLI not available"
)
def test_container_deployment():
    result = deploy_to_container("myapp:latest")
    assert result.exit_code == 0
```
You can also skip imperatively within a test using pytest.skip():
```python
def test_database_migration():
    if not database_is_available():
        pytest.skip("Database not available for testing")
    run_migration()
    assert get_schema_version() == 42
```
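Under the hood, pytest.skip() works by raising a special control-flow exception (Skipped) that the test runner intercepts and reports as a skip rather than a failure. A minimal sketch of that mechanism:

```python
import pytest

def classify_skip_call():
    """Call pytest.skip() and report the exception type it raises."""
    try:
        pytest.skip("demonstration only")
    except BaseException as exc:  # Skipped subclasses BaseException, not Exception
        return type(exc).__name__

print(classify_skip_call())
```

Because Skipped derives from BaseException rather than Exception, a broad `except Exception` inside your test code will not accidentally swallow a skip.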
The xfail Marker for Expected Failures
@pytest.mark.xfail marks a test that you expect to fail. This is different from skipping—the test still runs, but a failure doesn’t break your build.
Common use cases include tests for known bugs, tests for unimplemented features, and tests that fail on specific platforms:
```python
import pytest

@pytest.mark.xfail(reason="Bug #1234: Race condition in cache invalidation")
def test_concurrent_cache_update():
    cache = SharedCache()
    results = run_concurrent_updates(cache, num_threads=10)
    assert all(r.success for r in results)

@pytest.mark.xfail(
    raises=ZeroDivisionError,
    reason="Known issue with empty datasets"
)
def test_calculate_average_empty_list():
    result = calculate_average([])
    assert result == 0  # Should handle empty list gracefully
```
The raises parameter specifies which exception constitutes an expected failure. If the test fails with a different exception, pytest treats it as an actual failure.
The strict parameter changes behavior when an xfail test unexpectedly passes:
```python
@pytest.mark.xfail(strict=True, reason="Should fail until fix is deployed")
def test_known_broken_feature():
    # If this passes, the test suite fails—alerting you that
    # the bug might be fixed and the marker should be removed
    assert broken_function() == expected_value
```
With strict=True, a passing test becomes a failure. This prevents “expected failures” from silently becoming passing tests that nobody remembers to un-mark.
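If you want strict behavior for every xfail in the project without annotating each marker, pytest supports a global xfail_strict option in its configuration; individual markers can still opt out with strict=False:

```ini
# pytest.ini
[pytest]
xfail_strict = true
```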
Creating and Using Custom Markers
Custom markers let you categorize tests by any dimension relevant to your project. Define them in pytest.ini or pyproject.toml:
```ini
# pytest.ini
[pytest]
markers =
    slow: marks tests as slow (deselect with '-m "not slow"')
    integration: marks tests requiring external services
    database: marks tests that need database access
    smoke: critical path tests for deployment verification
```
Or in pyproject.toml:
```toml
# pyproject.toml
[tool.pytest.ini_options]
markers = [
    "slow: marks tests as slow (deselect with '-m \"not slow\"')",
    "integration: marks tests requiring external services",
    "database: marks tests that need database access",
    "smoke: critical path tests for deployment verification",
]
```
Apply markers to your tests:
```python
import pytest

@pytest.mark.slow
def test_full_data_export():
    """Takes 30+ seconds to export complete dataset."""
    result = export_all_records(format="csv")
    assert len(result.rows) > 100000

@pytest.mark.integration
@pytest.mark.database
def test_user_creation_flow():
    """Creates user in database and sends welcome email."""
    user = create_user("test@example.com")
    assert user.id is not None
    assert email_was_sent(to="test@example.com")

@pytest.mark.smoke
def test_health_endpoint():
    """Critical: Verify application is responding."""
    response = client.get("/health")
    assert response.status_code == 200
```
You can apply markers to entire test classes:
```python
@pytest.mark.integration
class TestPaymentProcessing:
    """All tests in this class require payment gateway access."""

    def test_charge_card(self):
        result = process_payment(amount=100, card="4111111111111111")
        assert result.success

    def test_refund(self):
        result = process_refund(transaction_id="txn_123")
        assert result.success

    @pytest.mark.slow
    def test_batch_settlement(self):
        """This specific test is also slow."""
        result = settle_daily_transactions()
        assert result.settled_count > 0
```
Selecting Tests with Marker Expressions
The -m flag accepts boolean expressions for precise test selection:
```shell
# Run only slow tests
pytest -m slow

# Run everything except slow tests
pytest -m "not slow"

# Run integration tests that aren't slow
pytest -m "integration and not slow"

# Run tests marked as either smoke or database
pytest -m "smoke or database"

# Complex expression: smoke tests, or fast integration tests
pytest -m "smoke or (integration and not slow)"
```
Combine marker selection with other pytest options:
```shell
# Run fast integration tests with verbose output
pytest -m "integration and not slow" -v

# Run smoke tests and stop on first failure
pytest -m smoke -x

# Run database tests with coverage
pytest -m database --cov=myapp
```
This selection mechanism integrates naturally with CI/CD pipelines:
```yaml
# .github/workflows/test.yml
jobs:
  fast-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run fast tests
        run: pytest -m "not slow and not integration"

  integration-tests:
    runs-on: ubuntu-latest
    needs: fast-tests
    steps:
      - uses: actions/checkout@v4
      - name: Run integration tests
        run: pytest -m integration
```
Parametrize Marker for Data-Driven Tests
@pytest.mark.parametrize runs a single test function with multiple input sets:
```python
import pytest

@pytest.mark.parametrize("email,is_valid", [
    ("user@example.com", True),
    ("user@subdomain.example.com", True),
    ("user+tag@example.com", True),
    ("invalid-email", False),
    ("@example.com", False),
    ("user@", False),
    ("", False),
])
def test_email_validation(email, is_valid):
    assert validate_email(email) == is_valid
```
Combine parametrize with other markers:
```python
@pytest.mark.parametrize("input_size", [100, 1000, 10000])
@pytest.mark.slow
def test_sorting_performance(input_size):
    data = generate_random_list(input_size)
    sorted_data = custom_sort(data)
    assert sorted_data == sorted(data)

@pytest.mark.parametrize("endpoint,expected_status", [
    ("/health", 200),
    ("/api/users", 200),
    pytest.param("/admin", 200, marks=pytest.mark.xfail(reason="Admin disabled")),
    ("/nonexistent", 404),
])
def test_endpoint_status(endpoint, expected_status):
    response = client.get(endpoint)
    assert response.status_code == expected_status
```
The pytest.param wrapper lets you apply markers to specific parameter combinations.
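A related convenience is parametrize's ids argument, which names each case so test output reads as test_status[health-check] rather than an auto-generated ID (the endpoints below are illustrative):

```python
import pytest

@pytest.mark.parametrize(
    "endpoint,expected_status",
    [("/health", 200), ("/nonexistent", 404)],
    ids=["health-check", "missing-route"],  # readable case names in output
)
def test_status(endpoint, expected_status):
    # Hypothetical client call; body elided for the sketch.
    pass
```

Readable IDs also make it easy to rerun a single failing case, e.g. `pytest "test_file.py::test_status[missing-route]"`.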
Best Practices and Common Patterns
Always use strict markers. Add this to your configuration:
```ini
# pytest.ini
[pytest]
addopts = --strict-markers
markers =
    slow: marks tests as slow
    integration: marks tests requiring external services
```
With --strict-markers, pytest errors on unregistered markers. This catches typos like @pytest.mark.slo before they silently create useless markers.
Create a marker hierarchy that matches your workflow. Think about how you actually run tests:
```python
# conftest.py
import pytest

def pytest_configure(config):
    config.addinivalue_line("markers", "unit: fast, isolated unit tests")
    config.addinivalue_line("markers", "integration: tests with external dependencies")
    config.addinivalue_line("markers", "e2e: end-to-end browser tests")
    config.addinivalue_line("markers", "slow: tests taking more than 5 seconds")
```
Use fixtures alongside markers for setup control:
```python
# conftest.py
import pytest

@pytest.fixture
def database_connection(request):
    if "database" not in [m.name for m in request.node.iter_markers()]:
        pytest.skip("Test not marked for database access")
    conn = create_database_connection()
    yield conn
    conn.close()
```
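A related pattern reads arguments off a marker to configure the fixture itself; request.node.get_closest_marker() returns the nearest matching marker, or None when the test is unmarked. The dataset_size marker and dataset fixture below are hypothetical examples:

```python
import pytest

@pytest.fixture
def dataset(request):
    # Pull the size argument from a hypothetical @pytest.mark.dataset_size(n)
    # marker; fall back to a small default when the test is unmarked.
    marker = request.node.get_closest_marker("dataset_size")
    size = marker.args[0] if marker is not None else 10
    return list(range(size))

@pytest.mark.dataset_size(500)
def test_large_dataset(dataset):
    assert len(dataset) == 500
```

This keeps per-test configuration next to the test that needs it, instead of scattering it across fixture parameters.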
Document your markers in your README or contributing guide. Future contributors need to know which markers exist and when to use them.
Markers transform pytest from a simple test runner into a flexible test orchestration system. Use them to match your testing strategy to your development workflow—fast feedback during development, comprehensive coverage in CI.