Go Goroutines: Lightweight Concurrency


Key Insights

  • Goroutines are user-space threads managed by Go’s runtime scheduler, allowing you to spawn thousands or millions of concurrent tasks with minimal memory overhead (2KB initial stack vs 1-2MB for OS threads)
  • Channels provide type-safe communication between goroutines following the principle “don’t communicate by sharing memory; share memory by communicating”
  • Context-based cancellation and proper channel closure are essential to prevent goroutine leaks, which silently consume memory and resources until your application crashes

Introduction to Goroutines

Goroutines are Go’s fundamental concurrency primitive—lightweight threads managed entirely by the Go runtime rather than the operating system. When you launch a goroutine with the go keyword, you’re creating a function that executes concurrently with other goroutines, all multiplexed onto a smaller number of OS threads by Go’s scheduler.

The key difference from traditional threading models is efficiency. An OS thread typically requires 1-2MB of stack space and involves expensive context switching through the kernel. Goroutines start with just 2KB of stack space that grows dynamically, and the Go scheduler handles context switching in user space using a work-stealing algorithm. This means you can realistically run hundreds of thousands of goroutines on modest hardware.

Here’s the simplest goroutine example:

package main

import (
    "fmt"
    "time"
)

func sayHello(name string) {
    fmt.Printf("Hello, %s!\n", name)
}

func main() {
    go sayHello("World")
    
    // Sleep to prevent main from exiting before goroutine runs
    time.Sleep(100 * time.Millisecond)
}

The go keyword transforms any function call into a concurrent operation. However, this naive example reveals a critical issue: if main exits, all goroutines are terminated regardless of their state.

Creating and Managing Goroutines

The Go scheduler uses an M:N model, multiplexing M goroutines across N OS threads. When a goroutine blocks on I/O, the scheduler moves other runnable goroutines onto available threads, maximizing CPU utilization without the overhead of creating new OS threads.

Proper goroutine management requires coordination. The sync.WaitGroup is the standard tool for waiting on multiple goroutines to complete:

package main

import (
    "fmt"
    "sync"
    "time"
)

func worker(id int, wg *sync.WaitGroup) {
    defer wg.Done() // Signal completion when function returns
    
    fmt.Printf("Worker %d starting\n", id)
    time.Sleep(time.Second)
    fmt.Printf("Worker %d done\n", id)
}

func main() {
    var wg sync.WaitGroup
    
    for i := 1; i <= 5; i++ {
        wg.Add(1) // Increment counter before launching goroutine
        go worker(i, &wg)
    }
    
    wg.Wait() // Block until all goroutines call Done()
    fmt.Println("All workers completed")
}

Always call wg.Add(1) before launching the goroutine, not inside it. Otherwise wg.Wait() may observe a zero counter and return before the goroutines have had a chance to register themselves.

Communication Between Goroutines

Channels are Go’s primary mechanism for goroutine communication. They’re typed, thread-safe conduits that enforce synchronization. An unbuffered channel blocks the sender until a receiver is ready, providing natural synchronization. Buffered channels allow sending without blocking until the buffer fills.

Here’s a producer-consumer pattern demonstrating both:

package main

import (
    "fmt"
    "time"
)

// Unbuffered channel - sender blocks until receiver reads
func unbufferedExample() {
    ch := make(chan int)
    
    go func() {
        for i := 0; i < 3; i++ {
            fmt.Printf("Sending %d\n", i)
            ch <- i // Blocks until main receives
        }
        close(ch)
    }()
    
    for num := range ch {
        fmt.Printf("Received %d\n", num)
        time.Sleep(100 * time.Millisecond)
    }
}

// Buffered channel - sender can send without blocking until buffer is full
func bufferedExample() {
    ch := make(chan int, 3)
    
    go func() {
        for i := 0; i < 3; i++ {
            fmt.Printf("Sending %d\n", i)
            ch <- i // Won't block for first 3 sends
        }
        close(ch)
    }()
    
    time.Sleep(200 * time.Millisecond)
    for num := range ch {
        fmt.Printf("Received %d\n", num)
    }
}

Always close channels from the sender side, never the receiver. Closing signals that no more values will be sent, allowing receivers to exit range loops gracefully.

Common Concurrency Patterns

The worker pool pattern is essential for controlling concurrency and resource usage:

package main

import (
    "fmt"
    "sync"
)

type Job struct {
    ID    int
    Value int
}

type Result struct {
    Job   Job
    Sum   int
}

func worker(id int, jobs <-chan Job, results chan<- Result, wg *sync.WaitGroup) {
    defer wg.Done()
    for job := range jobs {
        // Simulate work
        sum := 0
        for i := 0; i <= job.Value; i++ {
            sum += i
        }
        results <- Result{Job: job, Sum: sum}
    }
}

func main() {
    const numWorkers = 3
    jobs := make(chan Job, 10)
    results := make(chan Result, 10)
    
    var wg sync.WaitGroup
    
    // Start workers
    for i := 1; i <= numWorkers; i++ {
        wg.Add(1)
        go worker(i, jobs, results, &wg)
    }
    
    // Send jobs
    go func() {
        for i := 1; i <= 9; i++ {
            jobs <- Job{ID: i, Value: i * 100}
        }
        close(jobs)
    }()
    
    // Close results when all workers finish
    go func() {
        wg.Wait()
        close(results)
    }()
    
    // Collect results
    for result := range results {
        fmt.Printf("Job %d result: %d\n", result.Job.ID, result.Sum)
    }
}

The select statement enables powerful timeout and cancellation patterns:

func fetchWithTimeout(url string) (string, error) {
    result := make(chan string, 1)
    
    go func() {
        // Simulate fetch
        time.Sleep(2 * time.Second)
        result <- "data"
    }()
    
    select {
    case data := <-result:
        return data, nil
    case <-time.After(1 * time.Second):
        return "", fmt.Errorf("timeout")
    }
}

Synchronization Primitives

While channels are idiomatic, mutexes are sometimes more appropriate for protecting shared state:

package main

import (
    "fmt"
    "sync"
)

type Counter struct {
    mu    sync.Mutex
    value int
}

func (c *Counter) Increment() {
    c.mu.Lock()
    c.value++
    c.mu.Unlock()
}

func (c *Counter) Value() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.value
}

func main() {
    counter := &Counter{}
    var wg sync.WaitGroup
    
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            counter.Increment()
        }()
    }
    
    wg.Wait()
    fmt.Printf("Final count: %d\n", counter.Value())
}

Use channels for passing ownership of data or distributing work. Use mutexes for protecting shared state accessed by multiple goroutines. sync.RWMutex allows multiple concurrent readers but exclusive writers, improving performance for read-heavy workloads.

Goroutine Leaks and Best Practices

Goroutine leaks occur when goroutines block indefinitely, unable to exit. This commonly happens with channels that are never closed or read from:

// LEAK: Goroutine blocks forever
func leakyFunction() {
    ch := make(chan int)
    go func() {
        val := <-ch // Blocks forever - nobody sends
        fmt.Println(val)
    }()
    // Function returns, channel never receives data
}

// FIX: Use context for cancellation
func fixedFunction(ctx context.Context) {
    ch := make(chan int)
    go func() {
        select {
        case val := <-ch:
            fmt.Println(val)
        case <-ctx.Done():
            return // Clean exit when context cancelled
        }
    }()
}

The context package is the standard way to propagate cancellation signals:

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    
    result := make(chan string, 1)
    
    go func() {
        time.Sleep(10 * time.Second) // Simulated long operation
        result <- "done"
    }()
    
    select {
    case <-ctx.Done():
        fmt.Println("Operation cancelled:", ctx.Err())
    case res := <-result:
        fmt.Println(res)
    }
}

Always ask: how does this goroutine exit? Every goroutine should have a clear termination condition, whether through channel closure, context cancellation, or completing its work.

Performance Considerations

Goroutines aren’t free. Each consumes memory and adds scheduling overhead. For CPU-bound tasks, limit concurrency to runtime.NumCPU(). For I/O-bound tasks, you can scale much higher:

package main

import (
    "sync"
    "testing"
    "time"
)

func simulateIO() {
    time.Sleep(10 * time.Millisecond)
}

func BenchmarkSequential(b *testing.B) {
    for i := 0; i < b.N; i++ {
        for j := 0; j < 100; j++ {
            simulateIO()
        }
    }
}

func BenchmarkConcurrent(b *testing.B) {
    for i := 0; i < b.N; i++ {
        var wg sync.WaitGroup
        for j := 0; j < 100; j++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                simulateIO()
            }()
        }
        wg.Wait()
    }
}

Use go test -bench=. -benchmem to measure actual performance. Profile with pprof to identify goroutine leaks and contention. The runtime provides runtime.NumGoroutine() to track active goroutines during development.

Goroutines make concurrency accessible, but they require discipline. Structure your concurrent code with clear ownership, explicit cancellation paths, and proper synchronization. When in doubt, prefer channels for communication and reserve mutexes for protecting localized state. The simplicity of the go keyword belies the complexity it can create. Respect that power.
