Test Fixtures: Setup and Teardown Patterns

Key Insights

  • Test fixtures establish the preconditions your tests need, and proper setup/teardown patterns are the difference between a reliable test suite and one plagued by flaky, order-dependent failures.
  • Factory functions and builders beat static fixture files for maintainability—they make test data creation explicit, composable, and resistant to schema changes.
  • Database isolation through transactions (not truncation) gives you fast, reliable tests that can run in parallel without stepping on each other.

What Are Test Fixtures?

A test fixture is the baseline state your test needs to run. It’s the user account that must exist before you test login, the database records required for your query tests, and the mock server that stands in for a third-party API. Get fixtures wrong, and you’ll spend more time debugging test failures than actual bugs.

The core principle is simple: every test should start from a known state and leave no trace when it finishes. This isolation guarantees that tests can run in any order, in parallel, and produce consistent results.

Here’s the basic structure in pytest:

import pytest

@pytest.fixture
def user():
    """Create a test user before each test."""
    user = User.create(email="test@example.com", name="Test User")
    yield user
    user.delete()  # Teardown happens after the test

def test_user_can_update_profile(user):
    user.update(name="Updated Name")
    assert user.name == "Updated Name"

And the equivalent in Jest:

describe('User', () => {
  let user;

  beforeEach(async () => {
    user = await User.create({ email: 'test@example.com', name: 'Test User' });
  });

  afterEach(async () => {
    await user.delete();
  });

  it('can update profile', async () => {
    await user.update({ name: 'Updated Name' });
    expect(user.name).toBe('Updated Name');
  });
});

Common Setup Patterns

Per-Test vs. Shared Fixtures

Use per-test fixtures (beforeEach) when tests modify the fixture state. Use shared fixtures (beforeAll) for expensive, read-only resources like database connections or compiled schemas.

describe('OrderService', () => {
  let dbConnection; // Shared - expensive to create
  let order;        // Per-test - gets modified

  beforeAll(async () => {
    dbConnection = await Database.connect();
  });

  afterAll(async () => {
    await dbConnection.close();
  });

  beforeEach(async () => {
    order = await OrderFactory.create();
  });

  afterEach(async () => {
    await order.destroy();
  });
});
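The same split in pytest uses fixture scope instead of separate hooks: a module- or session-scoped fixture plays the role of beforeAll/afterAll, and a default (function-scoped) fixture plays beforeEach/afterEach. A minimal, self-contained sketch, with an in-memory SQLite connection standing in for the expensive shared resource:

```python
import sqlite3

import pytest

@pytest.fixture(scope="module")
def db_connection():
    """Shared: opened once per test module, like beforeAll/afterAll."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
    yield conn
    conn.close()

@pytest.fixture
def order_id(db_connection):
    """Per-test: a fresh row for every test, like beforeEach/afterEach."""
    cur = db_connection.execute("INSERT INTO orders (total) VALUES (0)")
    yield cur.lastrowid
    db_connection.execute("DELETE FROM orders WHERE id = ?", (cur.lastrowid,))

def test_order_starts_empty(db_connection, order_id):
    (total,) = db_connection.execute(
        "SELECT total FROM orders WHERE id = ?", (order_id,)
    ).fetchone()
    assert total == 0
```

Scope does the lifetime bookkeeping for you: pytest tears down the module-scoped connection once, after every test in the module has run.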

Factory Functions

Static fixture files (JSON, YAML) become maintenance nightmares. Factory functions let you create exactly what each test needs with sensible defaults:

def create_user(
    email: str = "test@example.com",
    name: str = "Test User",
    role: str = "member",
    verified: bool = True,
    **overrides
) -> User:
    """Factory function with sensible defaults."""
    return User.create(
        email=email,
        name=name,
        role=role,
        verified=verified,
        **overrides
    )

# Usage in tests
def test_admin_can_delete_users():
    admin = create_user(role="admin")
    target = create_user(email="target@example.com")
    
    admin.delete_user(target)
    
    assert User.find_by_email("target@example.com") is None

Builder Pattern for Complex Objects

When objects have many optional fields or complex relationships, builders make tests readable:

class OrderBuilder:
    def __init__(self):
        self._customer = None
        self._items = []
        self._discount = None
        self._shipping_address = None

    def for_customer(self, customer):
        self._customer = customer
        return self

    def with_item(self, product, quantity=1, price=None):
        self._items.append({
            'product': product,
            'quantity': quantity,
            'price': price or product.price
        })
        return self

    def with_discount(self, code, percentage):
        self._discount = {'code': code, 'percentage': percentage}
        return self

    def shipped_to(self, address):
        self._shipping_address = address
        return self

    def build(self):
        order = Order.create(customer=self._customer or create_user())
        for item in self._items:
            order.add_item(**item)
        if self._discount:
            order.apply_discount(**self._discount)
        if self._shipping_address:
            order.set_shipping(self._shipping_address)
        return order

# Clean, readable test setup
def test_discount_applies_to_all_items():
    order = (OrderBuilder()
        .with_item(create_product(price=100), quantity=2)
        .with_item(create_product(price=50), quantity=1)
        .with_discount("SAVE10", percentage=10)
        .build())
    
    assert order.total == 225  # (200 + 50) * 0.9

Teardown Strategies

Transaction Rollback

The fastest and most reliable cleanup strategy for database tests is wrapping each test in a transaction that never commits:

import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

@pytest.fixture
def db_session():
    """Provide a transactional scope for tests."""
    engine = create_engine(TEST_DATABASE_URL)
    connection = engine.connect()
    transaction = connection.begin()
    
    Session = sessionmaker(bind=connection)
    session = Session()
    
    yield session
    
    session.close()
    transaction.rollback()  # Undo everything
    connection.close()

def test_user_creation(db_session):
    user = User(email="test@example.com")
    db_session.add(user)
    db_session.flush()
    
    assert db_session.query(User).count() == 1
    # Transaction rolls back - no cleanup needed

External Resource Cleanup

For files and other resources, use context managers or ensure cleanup runs even when tests fail:

import os
import shutil
import tempfile
from contextlib import contextmanager

@contextmanager
def temp_directory():
    """Create a temporary directory that's always cleaned up."""
    path = tempfile.mkdtemp()
    try:
        yield path
    finally:
        shutil.rmtree(path, ignore_errors=True)

def test_file_export(db_session):
    with temp_directory() as tmpdir:
        exporter = DataExporter(output_dir=tmpdir)
        exporter.export_users()
        
        assert os.path.exists(os.path.join(tmpdir, "users.csv"))
    # Directory is gone regardless of test outcome
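In pytest specifically, the built-in tmp_path fixture already does this bookkeeping: each test receives a fresh pathlib.Path directory that pytest cleans up on its own. A sketch, with a minimal stand-in for the exporter (export_users here is a hypothetical helper, not the DataExporter API above):

```python
import pathlib

def export_users(output_dir: pathlib.Path) -> pathlib.Path:
    """Stand-in exporter: writes a users.csv into the given directory."""
    out = output_dir / "users.csv"
    out.write_text("email,name\ntest@example.com,Test User\n")
    return out

def test_file_export(tmp_path):
    # tmp_path is a unique pathlib.Path per test; pytest prunes old ones itself
    csv_file = export_users(tmp_path)

    assert csv_file.exists()
    assert "test@example.com" in csv_file.read_text()
```

No teardown code at all: isolation comes for free because every test writes into its own directory.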

Managing Test Dependencies

Dependency Injection for Testability

Design your code to accept dependencies rather than creating them internally:

class PaymentService:
    def __init__(self, gateway, notifier):
        self.gateway = gateway
        self.notifier = notifier

    def process_payment(self, order):
        result = self.gateway.charge(order.total, order.payment_method)
        if result.success:
            self.notifier.send_receipt(order.customer, result)
        return result

# In tests, inject mocks
@pytest.fixture
def payment_service():
    mock_gateway = Mock(spec=PaymentGateway)
    mock_gateway.charge.return_value = PaymentResult(success=True, transaction_id="txn_123")
    
    mock_notifier = Mock(spec=Notifier)
    
    return PaymentService(gateway=mock_gateway, notifier=mock_notifier)

def test_successful_payment_sends_receipt(payment_service):
    order = create_order(total=100)
    
    payment_service.process_payment(order)
    
    payment_service.notifier.send_receipt.assert_called_once()

Fixture Composition

Build complex fixtures from simpler ones:

@pytest.fixture
def user(db_session):
    return create_user()

@pytest.fixture
def product(db_session):
    return create_product()

@pytest.fixture
def order_with_items(db_session, user, product):
    """Composed fixture that depends on user and product."""
    order = Order.create(customer=user)
    order.add_item(product, quantity=2)
    return order

def test_order_total(order_with_items, product):
    assert order_with_items.total == product.price * 2

Database Fixture Patterns

Docker-Based Test Database

For integration tests, spin up a real database in a container:

import pytest
import docker

@pytest.fixture(scope="session")
def postgres_container():
    """Start a PostgreSQL container for the test session."""
    client = docker.from_env()
    container = client.containers.run(
        "postgres:15",
        environment={
            "POSTGRES_USER": "test",
            "POSTGRES_PASSWORD": "test",
            "POSTGRES_DB": "test_db"
        },
        ports={"5432/tcp": None},  # Random available port
        detach=True,
    )
    
    # Give the container a moment to start (a polling readiness check
    # is more robust than a fixed sleep; see below)
    import time
    time.sleep(2)
    container.reload()  # Refresh attrs so the published port is visible
    
    port = container.attrs['NetworkSettings']['Ports']['5432/tcp'][0]['HostPort']
    
    yield f"postgresql://test:test@localhost:{port}/test_db"
    
    container.stop()
    container.remove()

@pytest.fixture
def db_session(postgres_container):
    engine = create_engine(postgres_container)
    # Run migrations
    Base.metadata.create_all(engine)
    # ... rest of transaction fixture
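The fixed two-second sleep above is a guess that can fail on slow CI machines. A more robust alternative, sketched here with only the standard library (wait_for_port is a hypothetical helper, not part of the Docker SDK), is to poll until the port actually accepts connections:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> None:
    """Poll until (host, port) accepts TCP connections, or raise TimeoutError."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return  # Port is accepting connections
        except OSError:
            time.sleep(0.2)  # Not ready yet; back off briefly
    raise TimeoutError(f"{host}:{port} not ready after {timeout}s")
```

In the fixture above, read the mapped port first and then call wait_for_port("localhost", int(port)) in place of the sleep. (Accepting a TCP connection doesn't guarantee PostgreSQL has finished initializing; a stricter check is to retry an actual connection with your database driver.)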

Anti-Patterns to Avoid

Shared Mutable State

This is the most common source of flaky tests:

# BAD: Shared mutable state
class TestUserService:
    users = []  # Shared across all tests!
    
    def test_create_user(self):
        self.users.append(create_user())
        assert len(self.users) == 1  # Fails if another test ran first
    
    def test_list_users(self):
        assert len(self.users) == 0  # Depends on execution order

# GOOD: Isolated state per test
class TestUserService:
    @pytest.fixture(autouse=True)
    def setup(self):
        self.users = []
    
    def test_create_user(self):
        self.users.append(create_user())
        assert len(self.users) == 1  # Always passes

Fixtures That Do Too Much

Keep fixtures focused. If a fixture creates a user, configures permissions, sets up billing, and creates sample data, split it up:

# BAD: Monolithic fixture
@pytest.fixture
def everything():
    user = create_user()
    user.add_permission("admin")
    billing = setup_billing(user)
    create_sample_orders(user, count=10)
    return user, billing

# GOOD: Composable fixtures
@pytest.fixture
def user():
    return create_user()

@pytest.fixture
def admin_user(user):
    user.add_permission("admin")
    return user

@pytest.fixture
def user_with_billing(user):
    return user, setup_billing(user)

Framework-Specific Best Practices

pytest (Python)

Use conftest.py for shared fixtures and leverage scopes:

# conftest.py
@pytest.fixture(scope="session")
def database():
    """Once per test session."""
    pass

@pytest.fixture(scope="module")
def api_client():
    """Once per test module."""
    pass

@pytest.fixture(autouse=True)
def reset_state():
    """Runs automatically for every test."""
    yield
    # Reset any shared state here, e.g. clear caches or module-level mocks

Jest/Vitest (JavaScript)

Use describe blocks for fixture scoping:

describe('with authenticated user', () => {
  let authToken;
  
  beforeAll(async () => {
    authToken = await login(testUser);
  });
  
  describe('profile operations', () => {
    // Nested describe inherits authToken
    it('can fetch profile', async () => {
      const profile = await api.getProfile(authToken);
      expect(profile).toBeDefined();
    });
  });
});

JUnit 5 (Java)

Use @BeforeEach, @BeforeAll, and test interfaces for fixture reuse:

interface DatabaseTest {
    @BeforeEach
    default void setupDatabase(TestInfo info) {
        // Shared setup for all database tests
    }
}

class UserRepositoryTest implements DatabaseTest {
    // Automatically gets database setup
}

Conclusion

Good test fixtures are invisible—they set up exactly what’s needed and clean up without you thinking about it. Start with factory functions for simple cases, graduate to builders for complex objects, and always wrap database operations in transactions. Your future self, debugging a test failure at 11 PM, will thank you.
