Go sync.Mutex: Mutual Exclusion Locks
Key Insights
- Race conditions occur when multiple goroutines access shared memory without synchronization, leading to unpredictable behavior that sync.Mutex prevents through mutual exclusion
- Always use defer mutex.Unlock() immediately after locking to guarantee unlocking even during panics, and keep critical sections as small as possible to maximize concurrency
- For read-heavy workloads, sync.RWMutex provides significant performance improvements by allowing multiple concurrent readers while still protecting writes with exclusive locks
Introduction to Race Conditions
Go’s concurrency model makes it trivial to spin up thousands of goroutines, but this power comes with responsibility. When multiple goroutines access shared memory simultaneously, you face race conditions—situations where the program’s behavior depends on the unpredictable timing of goroutine execution.
Consider a simple counter that multiple goroutines increment:
```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var counter int
	var wg sync.WaitGroup

	// Launch 1000 goroutines that each increment counter 1000 times
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				counter++ // RACE CONDITION!
			}
		}()
	}

	wg.Wait()
	fmt.Printf("Expected: 1000000, Got: %d\n", counter)
}
```
Run this code and you’ll get different results each time—values like 823,491 or 957,234 instead of the expected 1,000,000. The problem? The counter++ operation isn’t atomic. It actually involves three steps: read the current value, increment it, and write it back. When goroutines interleave these steps, updates get lost.
You can detect these issues by running your code with Go’s race detector: go run -race main.go. This tool will immediately flag the concurrent access problem.
Understanding sync.Mutex Basics
The sync.Mutex type provides mutual exclusion—ensuring only one goroutine can access a critical section at a time. It’s a lock with two primary methods: Lock() and Unlock().
When a goroutine calls Lock(), it acquires the mutex. If another goroutine already holds it, the calling goroutine blocks until the mutex becomes available. Calling Unlock() releases the mutex, allowing other waiting goroutines to proceed.
Here’s the counter example fixed with a mutex:
```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var counter int
	var mu sync.Mutex
	var wg sync.WaitGroup

	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				mu.Lock()
				counter++
				mu.Unlock()
			}
		}()
	}

	wg.Wait()
	fmt.Printf("Expected: 1000000, Got: %d\n", counter)
}
```
Now you’ll consistently get 1,000,000. The mutex ensures that only one goroutine modifies counter at any given time. The code between Lock() and Unlock() is the critical section—the protected region where shared state is accessed.
Critical Sections and Best Practices
The golden rule for mutexes: keep critical sections as small as possible. Every microsecond spent holding a lock is time other goroutines spend waiting. Only protect the actual shared state access, not unrelated operations.
Always use defer to ensure unlocking happens even if the code panics:
```go
func (s *SafeStore) UpdateValue(key string, value int) error {
	s.mu.Lock()
	defer s.mu.Unlock()

	// If validation panics, mutex still gets unlocked
	if err := s.validate(key, value); err != nil {
		return err
	}
	s.data[key] = value
	return nil
}
```
Without defer, a panic would leave the mutex locked forever, deadlocking your program. This is such a common pattern that you should make it automatic: Lock() on one line, defer Unlock() on the next.
Deadlocks occur when goroutines wait for each other in a cycle. Here’s a classic mistake:
```go
type Account struct {
	mu      sync.Mutex
	balance int
}

// DEADLOCK RISK!
func Transfer(from, to *Account, amount int) {
	from.mu.Lock()
	defer from.mu.Unlock()
	to.mu.Lock() // If another goroutine locked 'to' then 'from', deadlock!
	defer to.mu.Unlock()

	from.balance -= amount
	to.balance += amount
}
```
If two goroutines simultaneously call Transfer(a, b, 100) and Transfer(b, a, 50), they deadlock. The solution is to establish a consistent locking order—for example, always lock accounts in order of their memory address or ID.
RWMutex for Read-Heavy Workloads
Many data structures are read far more often than they’re written. A regular mutex treats all operations equally, but sync.RWMutex distinguishes between readers and writers. Multiple goroutines can hold read locks simultaneously, but write locks are exclusive.
Use RLock()/RUnlock() for read operations and Lock()/Unlock() for writes:
```go
package main

import (
	"fmt"
	"sync"
)

type Cache struct {
	mu    sync.RWMutex
	items map[string]string
}

func NewCache() *Cache {
	return &Cache{
		items: make(map[string]string),
	}
}

func (c *Cache) Get(key string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	val, ok := c.items[key]
	return val, ok
}

func (c *Cache) Set(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[key] = value
}

func main() {
	cache := NewCache()
	cache.Set("user:1", "Alice")

	var wg sync.WaitGroup
	// 100 concurrent readers - no blocking between them
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				cache.Get("user:1")
			}
		}()
	}
	wg.Wait()
	fmt.Println("All reads completed")
}
```
For read-heavy workloads, RWMutex provides substantial performance improvements. The concurrent readers don’t block each other, only writers block everyone. However, if your workload has frequent writes, the overhead of RWMutex can actually make it slower than a regular Mutex. Benchmark your specific use case.
Common Pitfalls and Anti-patterns
Mutexes must never be copied after first use. The lock state is part of the mutex value, so copying creates a separate, independent lock that doesn’t protect the same critical section:
```go
type Counter struct {
	mu    sync.Mutex
	value int
}

// BUG: Copies the mutex!
func (c Counter) Increment() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.value++
}

// CORRECT: Use pointer receiver
func (c *Counter) Increment() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.value++
}
```
The go vet tool catches this mistake:
```
$ go vet .
./main.go:10:6: Increment passes lock by value: Counter contains sync.Mutex
```
Always use pointer receivers for methods on types containing mutexes. Similarly, never pass mutex-containing structs by value to functions.
Unlocking an already unlocked mutex causes a runtime panic. This typically happens when you unlock manually without defer and hit an early return path:
```go
func (s *Store) BadUpdate(key string) error {
	s.mu.Lock()
	if key == "" {
		return errors.New("empty key") // Forgot to unlock!
	}
	s.data[key] = "value"
	s.mu.Unlock()
	return nil
}
```
The defer pattern prevents this entire class of bugs.
Real-World Application: Thread-Safe Counter
Let’s build a practical statistics counter that tracks multiple metrics safely:
```go
package main

import (
	"fmt"
	"sync"
)

type StatsCounter struct {
	mu     sync.RWMutex
	counts map[string]int64
}

func NewStatsCounter() *StatsCounter {
	return &StatsCounter{
		counts: make(map[string]int64),
	}
}

func (s *StatsCounter) Increment(key string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.counts[key]++
}

func (s *StatsCounter) Add(key string, delta int64) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.counts[key] += delta
}

func (s *StatsCounter) Get(key string) int64 {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.counts[key]
}

func (s *StatsCounter) GetAll() map[string]int64 {
	s.mu.RLock()
	defer s.mu.RUnlock()
	// Return a copy to prevent external modification
	result := make(map[string]int64, len(s.counts))
	for k, v := range s.counts {
		result[k] = v
	}
	return result
}

func main() {
	stats := NewStatsCounter()
	var wg sync.WaitGroup

	// Simulate concurrent requests
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				stats.Increment("requests")
				if id%10 == 0 {
					stats.Increment("errors")
				}
			}
		}(i)
	}
	wg.Wait()

	all := stats.GetAll()
	fmt.Printf("Requests: %d\n", all["requests"])
	fmt.Printf("Errors: %d\n", all["errors"])
}
```
Here’s a benchmark comparing Mutex vs RWMutex for different read/write ratios:
```go
func BenchmarkMutexReadHeavy(b *testing.B) {
	type Counter struct {
		mu    sync.Mutex
		value int64
	}
	c := &Counter{}

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			// 90% reads, 10% writes
			if time.Now().UnixNano()%10 == 0 {
				c.mu.Lock()
				c.value++
				c.mu.Unlock()
			} else {
				c.mu.Lock()
				_ = c.value
				c.mu.Unlock()
			}
		}
	})
}

func BenchmarkRWMutexReadHeavy(b *testing.B) {
	type Counter struct {
		mu    sync.RWMutex
		value int64
	}
	c := &Counter{}

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			if time.Now().UnixNano()%10 == 0 {
				c.mu.Lock()
				c.value++
				c.mu.Unlock()
			} else {
				c.mu.RLock()
				_ = c.value
				c.mu.RUnlock()
			}
		}
	})
}
```
On my machine, RWMutex is 4-5x faster for this 90% read scenario. Your mileage will vary based on workload characteristics and hardware.
Mutexes are fundamental to writing correct concurrent Go programs. Master the basics—protect shared state, use defer for unlocking, minimize critical sections—and you’ll avoid the vast majority of concurrency bugs. When reads dominate, reach for RWMutex. Always validate with the race detector and benchmarks for your specific use case.