Python pytest: Complete Testing Framework Guide

Key Insights

  • pytest’s fixture system and dependency injection eliminate the boilerplate that makes unittest tests painful to write and maintain
  • Parametrization transforms repetitive test cases into concise, data-driven tests that are easier to extend and debug
  • Strategic use of markers and configuration enables fast feedback loops during development while maintaining comprehensive CI coverage

Introduction to pytest

pytest has become the de facto testing framework for Python projects, and for good reason. While unittest ships with the standard library, pytest offers a dramatically better developer experience with minimal boilerplate, powerful assertion introspection, and a plugin ecosystem that handles nearly every testing scenario you’ll encounter.

The key advantages are practical: you write plain functions instead of classes, use standard assert statements instead of assertEqual/assertTrue methods, and get detailed failure output without any extra work.

Install pytest and get started:

pip install pytest

Create your first test file:

# test_calculator.py
def add(a, b):
    return a + b

def test_add_positive_numbers():
    assert add(2, 3) == 5

def test_add_negative_numbers():
    assert add(-1, -1) == -2

def test_add_mixed_numbers():
    result = add(-1, 5)
    assert result == 4
    assert isinstance(result, int)

Run it with pytest test_calculator.py. When a test fails, pytest shows you exactly what went wrong—the values on both sides of the comparison, the line that failed, and the full context. No more squinting at “AssertionError” with no details.

Writing Effective Tests

pytest discovers tests automatically based on naming conventions: files named test_*.py or *_test.py, test functions starting with test_, and classes starting with Test (and without an __init__ method).

Structure your test directory to mirror your source code:

project/
├── src/
│   └── myapp/
│       ├── models.py
│       └── services.py
└── tests/
    ├── conftest.py
    ├── test_models.py
    └── test_services.py

Here’s a realistic example testing a user model:

# src/myapp/models.py
class User:
    def __init__(self, email, name):
        if not email or '@' not in email:
            raise ValueError("Invalid email address")
        self.email = email.lower()
        self.name = name
        self.active = True

    def deactivate(self):
        self.active = False

    def full_display(self):
        status = "active" if self.active else "inactive"
        return f"{self.name} <{self.email}> ({status})"


# tests/test_models.py
import pytest
from myapp.models import User

class TestUser:
    def test_creates_user_with_valid_email(self):
        user = User("John@Example.com", "John Doe")
        assert user.email == "john@example.com"  # normalized to lowercase
        assert user.name == "John Doe"
        assert user.active is True

    def test_rejects_invalid_email(self):
        with pytest.raises(ValueError, match="Invalid email"):
            User("not-an-email", "John Doe")

    def test_rejects_empty_email(self):
        with pytest.raises(ValueError):
            User("", "John Doe")

    def test_deactivate_sets_active_false(self):
        user = User("test@example.com", "Test User")
        user.deactivate()
        assert user.active is False

    def test_full_display_shows_status(self):
        user = User("test@example.com", "Test User")
        assert user.full_display() == "Test User <test@example.com> (active)"
        
        user.deactivate()
        assert user.full_display() == "Test User <test@example.com> (inactive)"

Fixtures: Setup and Teardown

Fixtures are pytest’s killer feature. They provide dependency injection for test setup, replacing the clunky setUp/tearDown methods from unittest. Define a fixture once, and any test that needs it simply declares it as a parameter.

Fixtures have scopes that control their lifecycle: function (default, runs for each test), class, module, and session (runs once for the entire test run).

# tests/conftest.py
import pytest
import sqlite3
from pathlib import Path

@pytest.fixture
def sample_user():
    """Fresh user instance for each test."""
    return User("test@example.com", "Test User")

@pytest.fixture(scope="module")
def db_connection(tmp_path_factory):
    """Database connection shared across all tests in a module."""
    db_path = tmp_path_factory.mktemp("data") / "test.db"
    conn = sqlite3.connect(db_path)
    conn.execute("""
        CREATE TABLE users (
            id INTEGER PRIMARY KEY,
            email TEXT UNIQUE,
            name TEXT
        )
    """)
    conn.commit()
    yield conn  # yield instead of return enables teardown
    conn.close()

@pytest.fixture
def db_with_users(db_connection):
    """Database pre-populated with test users."""
    cursor = db_connection.cursor()
    # Start from a known state: other tests share the module-scoped
    # connection and may have left rows behind
    cursor.execute("DELETE FROM users")
    cursor.executemany(
        "INSERT INTO users (email, name) VALUES (?, ?)",
        [
            ("alice@example.com", "Alice"),
            ("bob@example.com", "Bob"),
        ]
    )
    db_connection.commit()
    yield db_connection
    # Clean up after test
    cursor.execute("DELETE FROM users")
    db_connection.commit()


# tests/test_database.py
def test_insert_user(db_connection):
    cursor = db_connection.cursor()
    cursor.execute(
        "INSERT INTO users (email, name) VALUES (?, ?)",
        ("new@example.com", "New User")
    )
    db_connection.commit()
    
    cursor.execute("SELECT COUNT(*) FROM users")
    assert cursor.fetchone()[0] == 1

def test_query_existing_users(db_with_users):
    cursor = db_with_users.cursor()
    cursor.execute("SELECT name FROM users ORDER BY name")
    names = [row[0] for row in cursor.fetchall()]
    assert names == ["Alice", "Bob"]

The built-in tmp_path fixture provides a temporary directory unique to each test—perfect for file operations without cleanup headaches. capsys captures stdout/stderr, and monkeypatch lets you modify objects and environment variables safely.

Parametrization and Test Data

When you need to test the same logic with different inputs, parametrization eliminates copy-paste test methods:

import pytest

def validate_password(password):
    """Returns (is_valid, error_message)."""
    if len(password) < 8:
        return False, "Password must be at least 8 characters"
    if not any(c.isupper() for c in password):
        return False, "Password must contain uppercase letter"
    if not any(c.isdigit() for c in password):
        return False, "Password must contain a digit"
    return True, None


@pytest.mark.parametrize("password,expected_valid,expected_error", [
    # Valid passwords
    ("SecurePass1", True, None),
    ("MyP@ssw0rd", True, None),
    
    # Too short
    ("Short1", False, "Password must be at least 8 characters"),
    
    # Missing uppercase
    ("lowercase123", False, "Password must contain uppercase letter"),
    
    # Missing digit
    ("NoDigitsHere", False, "Password must contain a digit"),
])
def test_validate_password(password, expected_valid, expected_error):
    is_valid, error = validate_password(password)
    assert is_valid == expected_valid
    assert error == expected_error


# Combining multiple parameter sets
@pytest.mark.parametrize("username", ["alice", "bob", "charlie"])
@pytest.mark.parametrize("role", ["admin", "user", "guest"])
def test_user_role_combinations(username, role):
    """Runs 9 times (3 users × 3 roles)."""
    user = create_user(username, role)
    assert user.has_role(role)
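
When a parametrized case needs a readable name or its own mark, pytest.param wraps the value tuple. A sketch against the password validator (repeated here so the snippet stands alone; the "legacy" case is invented for illustration):

```python
import pytest

def validate_password(password):  # same validator as above
    if len(password) < 8:
        return False, "Password must be at least 8 characters"
    if not any(c.isupper() for c in password):
        return False, "Password must contain uppercase letter"
    if not any(c.isdigit() for c in password):
        return False, "Password must contain a digit"
    return True, None

@pytest.mark.parametrize("password,expected_valid", [
    # id= gives the case a readable name in test output
    pytest.param("SecurePass1", True, id="valid"),
    pytest.param("Short1", False, id="too-short"),
    # marks= attaches a mark to just this case
    pytest.param("legacy-pass", False, id="legacy",
                 marks=pytest.mark.skip(reason="legacy rules not ported")),
])
def test_password_with_ids(password, expected_valid):
    is_valid, _ = validate_password(password)
    assert is_valid == expected_valid
```

The ids show up in verbose output as test_password_with_ids[valid], test_password_with_ids[too-short], and so on, which makes failures far easier to locate than positional indices.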

Markers and Test Selection

Markers let you categorize tests and control which ones run:

import pytest
import sys

@pytest.mark.slow
def test_large_data_processing():
    """Takes 30+ seconds to run."""
    result = process_million_records()
    assert result is not None

@pytest.mark.integration
def test_external_api_connection():
    """Requires network access."""
    response = call_external_api()
    assert response.status_code == 200

@pytest.mark.skipif(
    sys.platform == "win32",
    reason="Unix-specific functionality"
)
def test_unix_permissions(tmp_path):
    import os
    testfile = tmp_path / "testfile"
    testfile.touch()
    os.chmod(testfile, 0o755)
    assert (os.stat(testfile).st_mode & 0o777) == 0o755

@pytest.mark.xfail(reason="Known bug in payment processing")
def test_refund_calculation():
    """Expected to fail until bug #1234 is fixed."""
    assert calculate_refund(100) == 95

Register custom markers in your configuration to avoid warnings, then run specific subsets:

pytest -m "not slow"           # Skip slow tests
pytest -m "integration"        # Only integration tests
pytest -k "test_user"          # Tests matching pattern
pytest --lf                    # Re-run last failed tests

Mocking and Patching

The monkeypatch fixture handles most mocking needs without importing anything:

import pytest
import requests

class WeatherClient:
    def __init__(self, api_key):
        self.api_key = api_key
        self.base_url = "https://api.weather.com"

    def get_temperature(self, city):
        response = requests.get(
            f"{self.base_url}/current",
            params={"city": city, "key": self.api_key}
        )
        response.raise_for_status()
        return response.json()["temperature"]


def test_get_temperature_success(monkeypatch):
    """Mock the requests.get call."""
    
    class MockResponse:
        status_code = 200
        
        def raise_for_status(self):
            pass
        
        def json(self):
            return {"temperature": 72, "unit": "F"}
    
    def mock_get(*args, **kwargs):
        return MockResponse()
    
    monkeypatch.setattr(requests, "get", mock_get)
    
    client = WeatherClient("fake-key")
    temp = client.get_temperature("New York")
    assert temp == 72


def test_get_temperature_api_error(monkeypatch):
    """Test error handling when API fails."""
    
    def mock_get(*args, **kwargs):
        raise requests.RequestException("Connection failed")
    
    monkeypatch.setattr(requests, "get", mock_get)
    
    client = WeatherClient("fake-key")
    with pytest.raises(requests.RequestException):
        client.get_temperature("New York")

For more complex mocking scenarios, pytest-mock provides a mocker fixture with the full power of unittest.mock:

def test_with_pytest_mock(mocker):
    mock_get = mocker.patch("requests.get")
    mock_get.return_value.json.return_value = {"temperature": 72}
    mock_get.return_value.raise_for_status = mocker.Mock()
    
    client = WeatherClient("fake-key")
    client.get_temperature("Boston")
    
    mock_get.assert_called_once()

Configuration and Best Practices

Configure pytest in pyproject.toml for a clean setup:

[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
python_functions = ["test_*"]
addopts = [
    "-ra",                    # Show extra test summary
    "--strict-markers",       # Error on unknown markers
    "-v",                     # Verbose output
]
markers = [
    "slow: marks tests as slow (deselect with '-m \"not slow\"')",
    "integration: marks tests requiring external services",
]
filterwarnings = [
    "error",                  # Treat warnings as errors
    "ignore::DeprecationWarning:somelib.*",
]

[tool.coverage.run]
source = ["src"]
branch = true

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "if TYPE_CHECKING:",
    "raise NotImplementedError",
]
fail_under = 80

Essential plugins to install:

  • pytest-cov: Coverage reporting with pytest --cov=src
  • pytest-xdist: Parallel execution with pytest -n auto
  • pytest-randomly: Randomize test order to catch hidden dependencies

For CI/CD, run the full suite with coverage:

pytest --cov=src --cov-report=xml --cov-report=term-missing -n auto

The key to maintainable tests is keeping them focused and independent. Each test should verify one behavior, fixtures should handle shared setup, and parametrization should handle variations. When tests are easy to write and fast to run, you’ll actually write them.
