Go Testing Package: Writing Effective Tests

Key Insights

  • Go’s built-in testing package provides everything you need for effective testing without external frameworks—embrace its simplicity rather than fighting it
  • Table-driven tests with t.Run subtests are the idiomatic Go pattern for handling multiple test cases cleanly and getting granular failure reporting
  • Design your code with interfaces from the start; this single practice makes mocking trivial and eliminates the need for complex mocking frameworks

Go’s Testing Philosophy

Go takes an opinionated stance on testing: you don’t need a framework. The standard library’s testing package handles unit tests, benchmarks, and examples out of the box. This isn’t a limitation—it’s a feature.

The conventions are simple. Test files end with _test.go. Test functions start with Test and take a *testing.T parameter. That’s it. No decorators, no magic, no configuration files.

// math.go
package math

func Add(a, b int) int {
    return a + b
}
// math_test.go
package math

import "testing"

func TestAdd(t *testing.T) {
    result := Add(2, 3)
    if result != 5 {
        t.Errorf("Add(2, 3) = %d; want 5", result)
    }
}

Run go test and you’re done. This simplicity scales surprisingly well.

Writing Your First Unit Test

The *testing.T type gives you everything needed to signal test failures and log information. Understanding when to use each method matters.

Use t.Error or t.Errorf when you want to record a failure but continue running the test. Use t.Fatal or t.Fatalf when continuing makes no sense—typically when setup fails. Use t.Log for debugging output that only appears when tests fail or when running with -v.

// stringutil/reverse.go
package stringutil

func Reverse(s string) string {
    runes := []rune(s)
    for i, j := 0, len(runes)-1; i < j; i, j = i+1, j-1 {
        runes[i], runes[j] = runes[j], runes[i]
    }
    return string(runes)
}
// stringutil/reverse_test.go
package stringutil

import "testing"

func TestReverse(t *testing.T) {
    input := "hello"
    want := "olleh"
    
    got := Reverse(input)
    
    if got != want {
        t.Errorf("Reverse(%q) = %q; want %q", input, got, want)
    }
}

func TestReverseEmpty(t *testing.T) {
    if got := Reverse(""); got != "" {
        t.Errorf("Reverse empty string = %q; want empty", got)
    }
}

func TestReverseUnicode(t *testing.T) {
    input := "日本語"
    want := "語本日"
    
    if got := Reverse(input); got != want {
        t.Errorf("Reverse(%q) = %q; want %q", input, got, want)
    }
}

Notice the error message format: Function(input) = got; want expected. This convention makes failures immediately understandable.

Table-Driven Tests

Writing separate test functions for each case doesn’t scale. Table-driven tests let you define inputs and expected outputs in a slice, then iterate through them. Combined with t.Run, you get named subtests that can run independently.

// validator/email.go
package validator

import (
    "errors"
    "regexp"
)

var (
    ErrEmailEmpty   = errors.New("email is empty")
    ErrEmailTooLong = errors.New("email is too long")
    ErrEmailInvalid = errors.New("email format is invalid")
)

var emailRegex = regexp.MustCompile(`^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$`)

func ValidateEmail(email string) error {
    if email == "" {
        return ErrEmailEmpty
    }
    if len(email) > 254 {
        return ErrEmailTooLong
    }
    if !emailRegex.MatchString(email) {
        return ErrEmailInvalid
    }
    return nil
}
// validator/email_test.go
package validator

import (
    "errors"
    "strings"
    "testing"
)

func TestValidateEmail(t *testing.T) {
    tests := []struct {
        name    string
        email   string
        wantErr error
    }{
        {
            name:    "valid email",
            email:   "user@example.com",
            wantErr: nil,
        },
        {
            name:    "empty email",
            email:   "",
            wantErr: ErrEmailEmpty,
        },
        {
            name:    "missing at symbol",
            email:   "userexample.com",
            wantErr: ErrEmailInvalid,
        },
        {
            name:    "missing domain",
            email:   "user@",
            wantErr: ErrEmailInvalid,
        },
        {
            name:    "too long",
            email:   strings.Repeat("a", 250) + "@b.com",
            wantErr: ErrEmailTooLong,
        },
        {
            name:    "valid with subdomain",
            email:   "user@mail.example.com",
            wantErr: nil,
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            err := ValidateEmail(tt.email)
            if !errors.Is(err, tt.wantErr) {
                t.Errorf("ValidateEmail(%q) error = %v; want %v", 
                    tt.email, err, tt.wantErr)
            }
        })
    }
}

Run a specific subtest with go test -run TestValidateEmail/empty_email. This granularity helps when debugging failures.

Test Helpers and Setup/Teardown

When test setup gets repetitive, extract it into helper functions. Mark them with t.Helper() so failure line numbers point to the actual test, not the helper.

// repository/user_test.go
package repository

import (
    "database/sql"
    "testing"
    
    _ "github.com/mattn/go-sqlite3"
)

func setupTestDB(t *testing.T) *sql.DB {
    t.Helper()
    
    db, err := sql.Open("sqlite3", ":memory:")
    if err != nil {
        t.Fatalf("failed to open database: %v", err)
    }
    
    _, err = db.Exec(`
        CREATE TABLE users (
            id INTEGER PRIMARY KEY,
            email TEXT UNIQUE NOT NULL,
            name TEXT NOT NULL
        )
    `)
    if err != nil {
        db.Close()
        t.Fatalf("failed to create schema: %v", err)
    }
    
    t.Cleanup(func() {
        db.Close()
    })
    
    return db
}

func TestUserRepository_Create(t *testing.T) {
    db := setupTestDB(t)
    repo := NewUserRepository(db)
    
    user := &User{Email: "test@example.com", Name: "Test User"}
    
    err := repo.Create(user)
    if err != nil {
        t.Fatalf("Create() error = %v", err)
    }
    
    if user.ID == 0 {
        t.Error("Create() did not set user ID")
    }
}

func TestUserRepository_FindByEmail(t *testing.T) {
    db := setupTestDB(t)
    repo := NewUserRepository(db)
    
    // Setup: create a user first
    want := &User{Email: "find@example.com", Name: "Find Me"}
    if err := repo.Create(want); err != nil {
        t.Fatalf("setup failed: %v", err)
    }
    
    got, err := repo.FindByEmail("find@example.com")
    if err != nil {
        t.Fatalf("FindByEmail() error = %v", err)
    }
    
    if got.Name != want.Name {
        t.Errorf("FindByEmail() name = %q; want %q", got.Name, want.Name)
    }
}

t.Cleanup registers teardown code that runs after the test completes, even when the test fails or calls t.Fatal. For package-level setup and teardown, use TestMain:

func TestMain(m *testing.M) {
    // Setup code here
    code := m.Run()
    // Teardown code here
    os.Exit(code)
}

Mocking and Interfaces

Go doesn’t need mocking frameworks. Define interfaces for external dependencies, then create simple mock implementations for tests.

// client/weather.go
package client

import (
    "context"
    "encoding/json"
    "net/http"
)

type HTTPClient interface {
    Do(req *http.Request) (*http.Response, error)
}

type WeatherClient struct {
    httpClient HTTPClient
    baseURL    string
}

func NewWeatherClient(client HTTPClient, baseURL string) *WeatherClient {
    return &WeatherClient{
        httpClient: client,
        baseURL:    baseURL,
    }
}

type Weather struct {
    Temperature float64 `json:"temperature"`
    Condition   string  `json:"condition"`
}

func (c *WeatherClient) GetCurrent(ctx context.Context, city string) (*Weather, error) {
    req, err := http.NewRequestWithContext(ctx, "GET", 
        c.baseURL+"/weather?city="+city, nil)
    if err != nil {
        return nil, err
    }
    
    resp, err := c.httpClient.Do(req)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    
    var weather Weather
    if err := json.NewDecoder(resp.Body).Decode(&weather); err != nil {
        return nil, err
    }
    
    return &weather, nil
}
// client/weather_test.go
package client

import (
    "bytes"
    "context"
    "io"
    "net/http"
    "testing"
)

type mockHTTPClient struct {
    response *http.Response
    err      error
}

func (m *mockHTTPClient) Do(req *http.Request) (*http.Response, error) {
    return m.response, m.err
}

func TestWeatherClient_GetCurrent(t *testing.T) {
    mockResp := &http.Response{
        StatusCode: 200,
        Body: io.NopCloser(bytes.NewBufferString(
            `{"temperature": 22.5, "condition": "sunny"}`)),
    }
    
    mock := &mockHTTPClient{response: mockResp}
    client := NewWeatherClient(mock, "http://api.example.com")
    
    weather, err := client.GetCurrent(context.Background(), "London")
    if err != nil {
        t.Fatalf("GetCurrent() error = %v", err)
    }
    
    if weather.Temperature != 22.5 {
        t.Errorf("Temperature = %f; want 22.5", weather.Temperature)
    }
    
    if weather.Condition != "sunny" {
        t.Errorf("Condition = %q; want sunny", weather.Condition)
    }
}

The key insight: accept interfaces, return structs. This makes your code naturally testable.

Code Coverage and Benchmarks

Generate coverage reports with go test -cover. For detailed HTML reports:

go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out -o coverage.html

Don’t chase 100% coverage. Focus on testing behavior that matters.

Benchmarks use *testing.B and run the code b.N times:

// sort/sort_test.go
package sort

import "testing"

func BenchmarkBubbleSort(b *testing.B) {
    for i := 0; i < b.N; i++ {
        b.StopTimer()
        data := generateRandomSlice(1000)
        b.StartTimer()
        
        BubbleSort(data)
    }
}

func BenchmarkQuickSort(b *testing.B) {
    for i := 0; i < b.N; i++ {
        b.StopTimer()
        data := generateRandomSlice(1000)
        b.StartTimer()
        
        QuickSort(data)
    }
}

Run all benchmarks with go test -bench=. (the dot is a regular expression matching every benchmark name). Use b.StopTimer() and b.StartTimer() to exclude setup work from the measurements.

Best Practices and Common Pitfalls

Keep tests fast. Slow tests don’t get run. Mock external services, use in-memory databases, parallelize with t.Parallel().

Keep tests independent. Each test should set up its own state. Shared mutable state between tests causes flaky failures that waste hours to debug.

Run the race detector. Use go test -race in CI. Race conditions are notoriously hard to reproduce, and the race detector catches them deterministically.

Test behavior, not implementation. Tests that break when you refactor internals are worse than no tests. Focus on public APIs and observable outcomes.

Use integration tests sparingly. They’re slow and brittle. Unit test the logic, integration test the boundaries.

The Go testing package rewards simplicity. Resist the urge to add frameworks, assertion libraries, or complex abstractions. Write straightforward tests, run them often, and let the tooling do its job.
