Go Race Detector: Finding Data Races

Key Insights

  • Data races occur when goroutines access shared memory concurrently without synchronization, leading to unpredictable behavior that’s nearly impossible to debug without tooling
  • Go’s race detector (-race flag) instruments memory accesses at compile time to catch races during execution, but it makes programs roughly 2-20x slower and 5-10x heavier on memory, so use it in testing, not production
  • Fix races with mutexes for complex state, channels for communication patterns, and atomic operations for simple counters—choose based on your coordination needs, not performance assumptions

What Are Data Races?

A data race happens when two or more goroutines access the same memory location concurrently, and at least one of those accesses is a write. The result is undefined behavior—your program might crash, silently corrupt data, or appear to work correctly until it doesn’t.

Data races are insidious because they’re non-deterministic. Your code might run perfectly in development but fail randomly in production when timing conditions align differently. Traditional debugging techniques like print statements or debuggers often alter the timing enough to hide the race entirely.

Here’s the canonical example of a data race:

package main

import (
    "fmt"
    "sync"
)

func main() {
    counter := 0
    var wg sync.WaitGroup

    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            counter++ // Race condition here
        }()
    }

    wg.Wait()
    fmt.Println("Counter:", counter)
}

You might expect this to print Counter: 1000, but you’ll get different values on different runs—maybe 987, maybe 1000, maybe 743. The increment operation isn’t atomic; it’s actually three operations: read the value, add one, write it back. When goroutines interleave these operations, updates get lost.

Enabling the Race Detector

Go’s race detector is built into the toolchain. Enable it by adding the -race flag to any go command:

# Run a program with race detection
go run -race main.go

# Test with race detection (this is where you'll use it most)
go test -race ./...

# Build a binary with race detection
go build -race -o myapp

# Install with race detection
go install -race

The race detector uses compile-time instrumentation and a runtime library to track memory accesses. This comes with significant overhead:

  • Memory usage increases by 5-10x
  • Execution time increases by 2-20x depending on the workload
  • Binary size increases

Because of this overhead, you should never deploy race-detector-enabled binaries to production. Instead, use it extensively in:

  • Development environments during coding
  • Automated test suites in CI/CD
  • Load testing and staging environments
  • Manual QA testing

The detector only finds races that actually execute during the run. If a race condition exists in code that doesn’t run during your test, it won’t be detected. This is why comprehensive test coverage matters.

Detecting Common Race Patterns

Concurrent Map Access

Maps in Go are not safe for concurrent use. Even reading from a map while another goroutine writes to it causes a race:

package main

import "sync"

func main() {
    m := make(map[string]int)
    var wg sync.WaitGroup

    // Writer goroutine
    wg.Add(1)
    go func() {
        defer wg.Done()
        for i := 0; i < 100; i++ {
            m["key"] = i
        }
    }()

    // Reader goroutine
    wg.Add(1)
    go func() {
        defer wg.Done()
        for i := 0; i < 100; i++ {
            _ = m["key"]
        }
    }()

    wg.Wait()
}

Run this with -race and you’ll get a detailed report. Even without the race detector, the runtime’s lightweight map check may kill the program with a “concurrent map read and map write” fatal error, and when it doesn’t, the race can silently corrupt memory.

Loop Variable Capture

A classic gotcha when launching goroutines in loops:

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    items := []string{"a", "b", "c"}

    for _, item := range items {
        wg.Add(1)
        go func() {
            defer wg.Done()
            fmt.Println(item) // Race before Go 1.22: goroutines share one variable
        }()
    }

    wg.Wait()
}

Before Go 1.22, the item variable was shared by every goroutine the loop launched: as the loop advanced, the variable’s value changed while the goroutines were reading it, so you might see “c” printed three times. Go 1.22 changed the semantics so each iteration gets a fresh variable, but the pattern still bites in code built with older toolchains.

Shared Slice Modifications

Appending to a shared slice from multiple goroutines creates races:

package main

import "sync"

func main() {
    var results []int
    var wg sync.WaitGroup

    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(val int) {
            defer wg.Done()
            results = append(results, val) // Race condition
        }(i)
    }

    wg.Wait()
}

The append operation reads the slice header, potentially allocates, and writes back. Multiple goroutines doing this concurrently will race.

Interpreting Race Detector Output

When the race detector finds a race, it prints a detailed report:

==================
WARNING: DATA RACE
Write at 0x00c000018090 by goroutine 7:
  main.main.func1()
      /path/to/main.go:12 +0x44

Previous read at 0x00c000018090 by goroutine 6:
  main.main.func1()
      /path/to/main.go:12 +0x3a

Goroutine 7 (running) created at:
  main.main()
      /path/to/main.go:10 +0x8c

Goroutine 6 (finished) created at:
  main.main()
      /path/to/main.go:10 +0x8c
==================

Breaking this down:

  • Memory address: 0x00c000018090 is where the race occurred
  • Current access: “Write at…” shows what triggered the report
  • Conflicting access: “Previous read at…” shows the racing access
  • Stack traces: Show exactly where in your code both accesses happened
  • Goroutine creation: Shows where the racing goroutines were spawned

The detector reports each race as it finds it and, by default, lets the program continue running, so a single run can produce several distinct reports.
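The detector’s behavior can be tuned through the GORACE environment variable; the options below come from the race detector’s documentation (the paths shown are placeholders):

```sh
# Stop at the first race report instead of continuing (handy in CI)
GORACE="halt_on_error=1" go test -race ./...

# Send reports to files and trim noisy path prefixes from stack traces
GORACE="log_path=/tmp/race strip_path_prefix=$PWD/" go run -race main.go
```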

Fixing Data Races

Mutexes for Complex State

When you need to protect complex state or multiple operations, use sync.Mutex:

package main

import (
    "fmt"
    "sync"
)

type SafeCounter struct {
    mu    sync.Mutex
    value int
}

func (c *SafeCounter) Increment() {
    c.mu.Lock()
    c.value++
    c.mu.Unlock()
}

func (c *SafeCounter) Value() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.value
}

func main() {
    counter := &SafeCounter{}
    var wg sync.WaitGroup

    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            counter.Increment()
        }()
    }

    wg.Wait()
    fmt.Println("Counter:", counter.Value())
}

Concurrent-Safe Maps

For maps, one option is sync.Map. Note that it’s optimized for two specific access patterns: entries written once and read many times, and goroutines that each operate on disjoint sets of keys; outside those patterns, a mutex-protected map usually performs better. Either way, it eliminates the race:

package main

import (
    "sync"
)

func main() {
    var m sync.Map
    var wg sync.WaitGroup

    // Writer
    wg.Add(1)
    go func() {
        defer wg.Done()
        for i := 0; i < 100; i++ {
            m.Store("key", i)
        }
    }()

    // Reader
    wg.Add(1)
    go func() {
        defer wg.Done()
        for i := 0; i < 100; i++ {
            m.Load("key")
        }
    }()

    wg.Wait()
}

Channels for Communication

When goroutines need to communicate, channels are often cleaner than shared memory:

package main

import "fmt"

func main() {
    results := make(chan int, 10)

    // Workers send results
    for i := 0; i < 10; i++ {
        go func(val int) {
            results <- val
        }(i)
    }

    // Collector receives results
    collected := make([]int, 0, 10)
    for i := 0; i < 10; i++ {
        collected = append(collected, <-results)
    }

    fmt.Println(collected)
}

Atomic Operations for Simple Values

For simple counters or flags, use the sync/atomic package:

package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

func main() {
    var counter int64
    var wg sync.WaitGroup

    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            atomic.AddInt64(&counter, 1)
        }()
    }

    wg.Wait()
    fmt.Println("Counter:", atomic.LoadInt64(&counter))
}

Atomic operations are faster than mutexes but only work for specific types and simple operations.

Limitations and Best Practices

The race detector has limitations you need to understand:

It only detects races that actually happen during execution. If your test suite doesn’t exercise a code path, races in that path won’t be detected. This makes comprehensive testing critical.

It can’t predict all possible races. The detector is not a static analysis tool—it won’t tell you “this code could race under certain conditions.” It only reports races that occur.

Performance overhead limits where you can use it. You can’t run race detection in production, so you might miss races that only occur under production load patterns.

Integrate race detection into your development workflow:

# Makefile example
.PHONY: test
test:
	go test -race -timeout 30s ./...

.PHONY: test-verbose
test-verbose:
	go test -race -v -timeout 30s ./...

For CI/CD, ensure race detection runs on every commit:

# GitHub Actions example
- name: Run tests with race detector
  run: go test -race -timeout 5m ./...

Best practices:

  • Run tests with -race regularly during development
  • Make race detection mandatory in CI/CD pipelines
  • Use -race during load testing in staging environments
  • Fix races immediately—they don’t get easier to debug later
  • Prefer channels for communication, mutexes for protecting state
  • Don’t optimize for performance until you’ve eliminated races
  • Remember that passing race detection doesn’t guarantee absence of races, only that detected executions were race-free

The race detector is one of Go’s most valuable tools for concurrent programming. Use it liberally, fix what it finds, and you’ll avoid entire categories of production bugs.
