Python pytest-asyncio: Testing Async Code

Key Insights

  • pytest-asyncio bridges the gap between pytest’s synchronous test runner and Python’s async/await syntax by managing event loop lifecycle automatically
  • The auto mode eliminates decorator boilerplate but strict mode provides explicit control—choose based on your team’s preference for convention over configuration
  • Async fixtures require the @pytest_asyncio.fixture decorator (not the standard @pytest.fixture) and support the same scopes as synchronous fixtures

Async Python code has become the standard for I/O-bound applications. Whether you’re building web services with FastAPI, making HTTP requests with httpx, or working with async database drivers, you’ll eventually need to test coroutines. The problem? pytest runs synchronously by default, and trying to test async code without proper tooling leads to frustrating errors.

Why Standard pytest Fails with Async Code

When you write an async test function and run it with vanilla pytest, you don’t get a clear error about async. Depending on your pytest version, the test is either skipped with a warning (“async def functions are not natively supported”) or silently reported as passing without ever running. pytest sees your coroutine object, doesn’t know what to do with it, and moves on.

# This test is collected, but its assertion never executes
async def test_broken():
    result = await some_async_function()
    assert result == "expected"  # Never runs

pytest collects the test, calls the function, receives a coroutine object, and discards it. No event loop exists to execute the coroutine. You might also encounter RuntimeError: no running event loop when your test tries to interact with async primitives.
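The mechanics are easy to see with plain asyncio: calling an async function only builds a coroutine object, and nothing runs until an event loop drives it.

```python
import asyncio

async def greet():
    return "hello"

# Calling an async function does not run its body; it returns a coroutine object
coro = greet()
print(type(coro).__name__)  # coroutine

# Only an event loop actually executes it
result = asyncio.run(coro)
print(result)  # hello
```

This is exactly what happens to an undecorated async test: pytest ends up holding the coroutine object with no loop to run it.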

pytest-asyncio solves this by providing an event loop, running your coroutines to completion, and integrating cleanly with pytest’s fixture system.

Setup and Configuration

Install pytest-asyncio alongside pytest:

pip install pytest pytest-asyncio

Configuration goes in pyproject.toml (preferred) or pytest.ini:

# pyproject.toml
[tool.pytest.ini_options]
asyncio_mode = "auto"
asyncio_default_fixture_loop_scope = "function"

The asyncio_mode setting has two options:

  • strict: Requires explicit @pytest.mark.asyncio on every async test. Verbose but clear.
  • auto: Automatically treats all async test functions as async tests. Less boilerplate, but implicit.

I recommend auto mode for projects that are primarily async. The explicitness of strict mode adds noise without proportional benefit when most of your tests are async anyway.

The asyncio_default_fixture_loop_scope controls which event loop async fixtures use by default. Keep it at function unless you have specific needs for shared state across tests.

Writing Your First Async Test

With auto mode configured, async tests look nearly identical to sync tests:

# app/services.py
import httpx

async def fetch_user(user_id: int) -> dict:
    async with httpx.AsyncClient() as client:
        response = await client.get(f"https://api.example.com/users/{user_id}")
        response.raise_for_status()
        return response.json()

# tests/test_services.py
import pytest
from unittest.mock import AsyncMock, MagicMock, patch
from app.services import fetch_user

async def test_fetch_user_returns_user_data():
    # httpx's response.json() and raise_for_status() are synchronous,
    # so the mocked response must be a MagicMock, not an AsyncMock
    mock_response = MagicMock()
    mock_response.json.return_value = {"id": 1, "name": "Alice"}

    with patch("app.services.httpx.AsyncClient") as mock_client:
        mock_client.return_value.__aenter__.return_value.get = AsyncMock(
            return_value=mock_response
        )

        result = await fetch_user(1)

        assert result == {"id": 1, "name": "Alice"}

If you’re using strict mode, add the decorator:

@pytest.mark.asyncio
async def test_fetch_user_returns_user_data():
    # ... same test body
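
In strict mode you can also mark an entire module at once with pytest’s standard pytestmark hook, which keeps the explicitness without per-test noise:

```python
# tests/test_services.py
import pytest

# Applies @pytest.mark.asyncio to every test in this module
pytestmark = pytest.mark.asyncio

async def test_fetch_user_returns_user_data():
    ...

async def test_fetch_user_handles_errors():
    ...
```

The test names here are placeholders; the point is the single module-level marker.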

Async Fixtures

Async fixtures enable setup and teardown operations that require await. The critical detail: use @pytest_asyncio.fixture, not the standard @pytest.fixture.

import pytest_asyncio
import asyncpg

@pytest_asyncio.fixture
async def db_connection():
    conn = await asyncpg.connect(
        host="localhost",
        database="test_db",
        user="postgres",
        password="secret"
    )
    yield conn
    await conn.close()

async def test_user_query(db_connection):
    result = await db_connection.fetch("SELECT * FROM users WHERE id = $1", 1)
    assert len(result) == 1

The yield pattern works exactly like sync fixtures—code before yield is setup, code after is teardown. The fixture runs within the test’s event loop.

For expensive resources like database connection pools, use broader scopes:

@pytest_asyncio.fixture(scope="session")
async def db_pool():
    pool = await asyncpg.create_pool(
        host="localhost",
        database="test_db",
        min_size=2,
        max_size=10
    )
    yield pool
    await pool.close()

@pytest_asyncio.fixture
async def db_connection(db_pool):
    async with db_pool.acquire() as conn:
        yield conn

The session-scoped pool persists across all tests, while each test gets its own connection from that pool.

Testing Real-World Async Patterns

Mocking HTTP Clients

Real applications make HTTP requests. Tests shouldn’t hit actual APIs. Here’s a pattern for testing httpx-based code:

import pytest
from unittest.mock import AsyncMock, MagicMock
import httpx

async def get_weather(city: str) -> dict:
    async with httpx.AsyncClient() as client:
        response = await client.get(
            "https://api.weather.com/current",
            params={"city": city}
        )
        return {"city": city, "temp": response.json()["temperature"]}

@pytest.fixture
def mock_httpx_client(monkeypatch):
    mock_response = MagicMock()
    mock_response.json.return_value = {"temperature": 72}
    
    mock_client = AsyncMock()
    mock_client.__aenter__.return_value.get = AsyncMock(return_value=mock_response)
    
    monkeypatch.setattr(httpx, "AsyncClient", lambda: mock_client)
    return mock_client

async def test_get_weather(mock_httpx_client):
    result = await get_weather("Seattle")
    
    assert result == {"city": "Seattle", "temp": 72}

Testing Concurrent Operations

When your code uses asyncio.gather or asyncio.create_task, test that concurrency behaves correctly:

import asyncio

async def fetch_all_users(user_ids: list[int]) -> list[dict]:
    async def fetch_one(uid):
        # Simulated async fetch
        await asyncio.sleep(0.01)
        return {"id": uid, "name": f"User {uid}"}
    
    return await asyncio.gather(*[fetch_one(uid) for uid in user_ids])

async def test_fetch_all_users_concurrent():
    result = await fetch_all_users([1, 2, 3])
    
    assert len(result) == 3
    assert all(isinstance(u, dict) for u in result)
    assert {u["id"] for u in result} == {1, 2, 3}
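
gather’s failure semantics deserve a test of their own: by default the first exception propagates, while return_exceptions=True returns exceptions as values. A stdlib sketch (fetch_one here is a stand-in for a real fetch):

```python
import asyncio

async def fetch_one(uid: int) -> dict:
    if uid < 0:
        raise ValueError(f"bad id: {uid}")
    await asyncio.sleep(0.01)
    return {"id": uid}

async def test_gather_collects_exceptions():
    # return_exceptions=True collects failures instead of raising the first one
    results = await asyncio.gather(
        fetch_one(1), fetch_one(-1), return_exceptions=True
    )
    assert results[0] == {"id": 1}
    assert isinstance(results[1], ValueError)
```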

For testing that operations actually run concurrently (not sequentially), measure timing:

import time

async def test_operations_run_concurrently():
    start = time.monotonic()
    result = await fetch_all_users([1, 2, 3, 4, 5])
    elapsed = time.monotonic() - start
    
    # 5 operations at 0.01s each should take ~0.01s concurrent, not 0.05s sequential
    assert elapsed < 0.03
    assert len(result) == 5
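
Timing assertions like the one above can be flaky on a loaded CI machine. A less fragile alternative (a sketch, not a pytest-asyncio feature) is to instrument the coroutines and assert on peak overlap instead of wall-clock time:

```python
import asyncio

async def test_peak_concurrency():
    in_flight = 0
    peak = 0

    async def tracked_fetch(uid: int) -> int:
        nonlocal in_flight, peak
        in_flight += 1
        peak = max(peak, in_flight)
        await asyncio.sleep(0.01)
        in_flight -= 1
        return uid

    await asyncio.gather(*[tracked_fetch(i) for i in range(5)])
    # All five coroutines were in flight at once, so they ran concurrently
    assert peak == 5
```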

Common Pitfalls and Debugging

Event Loop Lifecycle Issues

The most common mistake is trying to share async resources across different event loops. Each test function (with function-scoped fixtures) gets its own loop. Session-scoped async fixtures share a loop, but that loop differs from function-scoped test loops.

# WRONG: This will fail with "attached to a different loop" errors
@pytest_asyncio.fixture(scope="session")
async def shared_client():
    return httpx.AsyncClient()  # Created in session loop

async def test_something(shared_client):
    # Runs in function loop - different from session loop!
    await shared_client.get("https://example.com")  # Fails

The fix: use function scope, pin the fixture and its tests to the same event loop with the loop_scope option available in recent pytest-asyncio releases (e.g. @pytest_asyncio.fixture(scope="session", loop_scope="session")), or ensure the resource doesn’t bind to a specific loop.

Handling Timeouts

Async tests can hang forever if something goes wrong. Add timeouts:

import asyncio

async def test_with_timeout():
    async def slow_operation():
        await asyncio.sleep(10)
        return "done"
    
    with pytest.raises(asyncio.TimeoutError):
        await asyncio.wait_for(slow_operation(), timeout=0.1)

For test-level timeouts, install the pytest-timeout plugin, which supplies the timeout setting:

[tool.pytest.ini_options]
timeout = 30

Mixing Sync and Async Fixtures

Fixtures can mix in both directions: async fixtures can depend on sync ones, and with pytest-asyncio a sync fixture can also depend on an async fixture. The async fixture is resolved first, and its value passes to the sync fixture:

@pytest_asyncio.fixture
async def async_data():
    await asyncio.sleep(0.01)
    return {"key": "value"}

@pytest.fixture
def processed_data(async_data):
    # async_data is already resolved here
    return {**async_data, "processed": True}

async def test_combined(processed_data):
    assert processed_data == {"key": "value", "processed": True}

Best Practices and Performance Tips

Structure tests by behavior, not by async-ness. Don’t create separate directories for async vs sync tests. Group tests by the feature they verify.

Use session-scoped fixtures for expensive resources. Database pools, Redis connections, and HTTP client sessions should persist across tests when possible. Just ensure proper cleanup.

Run tests in parallel with pytest-xdist. Async tests parallelize well because they’re already designed for concurrent execution:

pip install pytest-xdist
pytest -n auto  # Uses all available CPU cores

Consider anyio for framework-agnostic testing. If your code might run on Trio instead of asyncio, the pytest plugin bundled with the anyio package provides similar functionality with backend flexibility. For pure asyncio codebases, pytest-asyncio remains the simpler choice.

Keep async tests focused. Each test should verify one behavior. The async overhead is minimal—don’t batch unrelated assertions to “save time.”

Testing async code requires understanding event loop lifecycle and using the right decorators, but pytest-asyncio handles the complexity. Configure it once, use async fixtures where needed, and write tests that look almost identical to their synchronous counterparts.
