Go sync.Map: Concurrent-Safe Maps
Key Insights
• Go’s built-in maps crash the program when accessed concurrently without synchronization, making sync.Map essential for concurrent scenarios where multiple goroutines need shared map access
• sync.Map excels in two specific patterns: entries written once but read many times, and disjoint key sets accessed by different goroutines—otherwise, a regular map with RWMutex often performs better
• The lack of type safety and compile-time guarantees means you’ll need type assertions for every retrieval, making type-safe wrappers a practical necessity for production code
The Concurrency Problem with Regular Maps
Go’s built-in maps are not safe for concurrent access. If multiple goroutines read and write the same map simultaneously, you get race conditions that lead to crashes or data corruption. The Go runtime actively detects unsynchronized map access and terminates the program with a fatal “concurrent map read and map write” error; this is an unrecoverable runtime throw, not a panic that recover can catch.
Here’s what happens when you naively share a map across goroutines:
package main

import (
	"fmt"
	"sync"
)

func main() {
	m := make(map[int]int)
	var wg sync.WaitGroup

	// Spawn 10 goroutines writing to the map
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				m[n] = j // DATA RACE: fatal "concurrent map writes"
			}
		}(i)
	}

	wg.Wait()
	fmt.Println("Completed:", len(m))
}
Run this under the race detector (go run -race main.go) and you’ll see immediate failures; even without the flag, the runtime’s own check will crash the program. The solution is either protecting a regular map with a mutex or using sync.Map when the access patterns align with its optimization goals.
sync.Map Basics and API Overview
The sync.Map type provides a concurrent-safe map implementation with a straightforward API. Unlike regular maps, you don’t use bracket notation for access. Instead, you work with methods:
- Store(key, value interface{}) - Set a key-value pair
- Load(key interface{}) - Retrieve a value; returns (value, ok)
- Delete(key interface{}) - Remove a key
- LoadOrStore(key, value interface{}) - Atomic get-or-set operation
- LoadAndDelete(key interface{}) - Atomic get-and-remove operation
- Range(func(key, value interface{}) bool) - Iterate over entries
Here’s basic usage:
package main

import (
	"fmt"
	"sync"
)

func main() {
	var m sync.Map

	// Store values
	m.Store("name", "Alice")
	m.Store("age", 30)
	m.Store("city", "NYC")

	// Load values
	if val, ok := m.Load("name"); ok {
		fmt.Println("Name:", val.(string))
	}

	// Delete
	m.Delete("city")

	// LoadOrStore - returns existing value if present
	actual, loaded := m.LoadOrStore("name", "Bob")
	fmt.Println("Value:", actual, "Was loaded:", loaded) // Alice, true
}
For concurrent access, sync.Map handles all synchronization internally:
func concurrentAccess() {
	var m sync.Map
	var wg sync.WaitGroup

	// Multiple writers
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			m.Store(n, n*100)
		}(i)
	}

	// Multiple readers
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			if val, ok := m.Load(n); ok {
				fmt.Printf("Key %d = %v\n", n, val)
			}
		}(i)
	}

	wg.Wait()
}
When to Use sync.Map vs Mutex-Protected Maps
Don’t default to sync.Map for every concurrent map scenario. The Go documentation explicitly states it’s optimized for two specific use cases:
- Entries written once, read many times - Think configuration data, lookup tables, or caches with rare updates
- Disjoint key sets - Different goroutines work with completely separate keys
For write-heavy workloads or when goroutines frequently access the same keys, a regular map with sync.RWMutex often performs better:
// sync.Map approach
var sm sync.Map

// Mutex-protected map approach
type SafeMap struct {
	mu sync.RWMutex
	m  map[string]int
}

// NewSafeMap initializes the inner map; writing to a nil map would panic.
func NewSafeMap() *SafeMap {
	return &SafeMap{m: make(map[string]int)}
}

func (s *SafeMap) Store(key string, value int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.m[key] = value
}

func (s *SafeMap) Load(key string) (int, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	val, ok := s.m[key]
	return val, ok
}
For read-heavy workloads with occasional writes, sync.Map typically wins. For write-heavy scenarios or when you need features like len(), use a mutex-protected map. Benchmark your specific use case—performance characteristics vary significantly based on access patterns.
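“Benchmark it” is easy to do concretely. The sketch below (the function names are mine, not from any library) uses testing.Benchmark from the standard library to compare parallel reads on a sync.Map against an RWMutex-protected map. Absolute numbers depend on your Go version, core count, and workload, so treat it as a template rather than a verdict:

```go
package main

import (
	"fmt"
	"sync"
	"testing"
)

// benchSyncMap measures parallel Load calls on a pre-populated sync.Map.
func benchSyncMap() testing.BenchmarkResult {
	var m sync.Map
	m.Store("key", 1)
	return testing.Benchmark(func(b *testing.B) {
		b.RunParallel(func(pb *testing.PB) {
			for pb.Next() {
				m.Load("key")
			}
		})
	})
}

// benchRWMutex measures the same read pattern on an RWMutex-protected map.
func benchRWMutex() testing.BenchmarkResult {
	m := map[string]int{"key": 1}
	var mu sync.RWMutex
	return testing.Benchmark(func(b *testing.B) {
		b.RunParallel(func(pb *testing.PB) {
			for pb.Next() {
				mu.RLock()
				_ = m["key"]
				mu.RUnlock()
			}
		})
	})
}

func main() {
	fmt.Printf("sync.Map Load: %d ns/op\n", benchSyncMap().NsPerOp())
	fmt.Printf("RWMutex load:  %d ns/op\n", benchRWMutex().NsPerOp())
}
```

For a write-heavy comparison, swap the Load/RLock bodies for Store/Lock and re-run; the ranking often flips.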
Common Patterns and Best Practices
Concurrent Cache Pattern
A common use case is building an in-memory cache that multiple goroutines access:
type Cache struct {
	data sync.Map
}

func (c *Cache) Get(key string) (interface{}, bool) {
	return c.data.Load(key)
}

func (c *Cache) Set(key string, value interface{}) {
	c.data.Store(key, value)
}

// GetOrCompute returns the cached value, computing and storing it if absent.
// Under contention, compute may run in several goroutines at once;
// LoadOrStore guarantees only one result is kept.
func (c *Cache) GetOrCompute(key string, compute func() interface{}) interface{} {
	if val, ok := c.data.Load(key); ok {
		return val
	}
	newVal := compute()
	actual, _ := c.data.LoadOrStore(key, newVal)
	return actual
}
Safe Iteration with Range
The Range method provides safe iteration. The function you pass receives each key-value pair and should return true to continue or false to stop. Note that Range does not lock the map for the duration of the call: it is not a consistent snapshot, and entries stored or deleted concurrently may or may not be visited.
func printAll(m *sync.Map) {
	m.Range(func(key, value interface{}) bool {
		fmt.Printf("%v: %v\n", key, value)
		return true // continue iteration
	})
}

func findFirst(m *sync.Map, predicate func(interface{}) bool) interface{} {
	var result interface{}
	m.Range(func(key, value interface{}) bool {
		if predicate(value) {
			result = value
			return false // stop iteration
		}
		return true
	})
	return result
}
Type-Safe Wrapper
Since sync.Map uses interface{}, you’ll need type assertions everywhere. Create type-safe wrappers for production code:
type StringIntMap struct {
	m sync.Map
}

func (s *StringIntMap) Store(key string, value int) {
	s.m.Store(key, value)
}

func (s *StringIntMap) Load(key string) (int, bool) {
	val, ok := s.m.Load(key)
	if !ok {
		return 0, false
	}
	return val.(int), true
}

func (s *StringIntMap) Range(f func(key string, value int) bool) {
	s.m.Range(func(key, value interface{}) bool {
		return f(key.(string), value.(int))
	})
}
Pitfalls and Limitations
No Type Safety
Every retrieval requires type assertion, which can panic if you store the wrong type:
var m sync.Map
m.Store("count", 42)
m.Store("name", "Alice")

// This will panic at runtime
val, _ := m.Load("name")
count := val.(int) // panic: interface conversion
Always use the comma-ok idiom or wrap in type-safe abstractions.
No Len() Method
You cannot get the count of entries without iterating:
func countEntries(m *sync.Map) int {
	count := 0
	m.Range(func(_, _ interface{}) bool {
		count++
		return true
	})
	return count
}
If you need frequent length checks, sync.Map isn’t the right choice.
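If you need O(1) length checks but the rest of your workload still fits sync.Map, one option is pairing it with an atomic counter. This sketch (countedMap is an illustrative name) relies on the atomic Swap and LoadAndDelete methods so each insert and delete adjusts the count exactly once; note that sync.Map.Swap requires Go 1.20+:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// countedMap pairs a sync.Map with an atomic counter so Len is O(1).
type countedMap struct {
	m   sync.Map
	len int64
}

// Store inserts or overwrites; Swap atomically reports whether the key
// already existed, so the counter only grows on a true insert.
func (c *countedMap) Store(key, value interface{}) {
	if _, loaded := c.m.Swap(key, value); !loaded {
		atomic.AddInt64(&c.len, 1)
	}
}

// Delete decrements only when LoadAndDelete actually removed an entry.
func (c *countedMap) Delete(key interface{}) {
	if _, loaded := c.m.LoadAndDelete(key); loaded {
		atomic.AddInt64(&c.len, -1)
	}
}

func (c *countedMap) Len() int {
	return int(atomic.LoadInt64(&c.len))
}

func main() {
	var c countedMap
	c.Store("a", 1)
	c.Store("b", 2)
	c.Store("a", 3) // overwrite, length stays 2
	c.Delete("b")
	fmt.Println("len:", c.Len()) // len: 1
}
```

Because Swap and LoadAndDelete are single atomic operations, the counter stays consistent with the map even under concurrent Store/Delete calls, though a Len read taken mid-update reflects one moment, not a frozen snapshot.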
Write-Heavy Performance
For workloads dominated by writes to the same keys, sync.Map performs worse than a mutex-protected map. The optimization for read-mostly workloads comes at a cost for write-heavy scenarios.
Real-World Use Case: HTTP Request Tracker
Here’s a practical example tracking active HTTP requests with metadata:
package main

import (
	"fmt"
	"sync"
	"time"
)

type RequestInfo struct {
	Method    string
	Path      string
	StartTime time.Time
}

type RequestTracker struct {
	requests sync.Map
}

func (rt *RequestTracker) StartRequest(id string, method, path string) {
	rt.requests.Store(id, &RequestInfo{
		Method:    method,
		Path:      path,
		StartTime: time.Now(),
	})
}

func (rt *RequestTracker) EndRequest(id string) {
	rt.requests.Delete(id)
}

func (rt *RequestTracker) GetActiveRequests() []*RequestInfo {
	var active []*RequestInfo
	rt.requests.Range(func(key, value interface{}) bool {
		active = append(active, value.(*RequestInfo))
		return true
	})
	return active
}

func (rt *RequestTracker) GetLongRunning(threshold time.Duration) []*RequestInfo {
	var longRunning []*RequestInfo
	now := time.Now()
	rt.requests.Range(func(key, value interface{}) bool {
		info := value.(*RequestInfo)
		if now.Sub(info.StartTime) > threshold {
			longRunning = append(longRunning, info)
		}
		return true
	})
	return longRunning
}

func main() {
	tracker := &RequestTracker{}
	var wg sync.WaitGroup

	// Simulate concurrent requests
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			id := fmt.Sprintf("req-%d", n)
			tracker.StartRequest(id, "GET", fmt.Sprintf("/api/resource/%d", n))
			time.Sleep(time.Duration(n*100) * time.Millisecond)
			tracker.EndRequest(id)
		}(i)
	}

	// Monitor long-running requests
	time.Sleep(250 * time.Millisecond)
	longRunning := tracker.GetLongRunning(200 * time.Millisecond)
	fmt.Printf("Long-running requests: %d\n", len(longRunning))

	wg.Wait()
	fmt.Printf("Active requests: %d\n", len(tracker.GetActiveRequests()))
}
This demonstrates sync.Map’s strength: multiple goroutines independently writing disjoint keys (different request IDs) while a monitoring goroutine safely reads across all entries. The read-heavy monitoring pattern and write-once-per-request pattern align perfectly with sync.Map’s optimizations.
Use sync.Map when your access patterns match its strengths. For everything else, stick with a mutex-protected map and enjoy type safety and better write performance.