Go Channels: Communication Between Goroutines

Key Insights

  • Channels are Go’s primary mechanism for safe communication between goroutines, embodying the philosophy “don’t communicate by sharing memory; share memory by communicating”
  • Unbuffered channels provide synchronization guarantees by blocking until both sender and receiver are ready, while buffered channels allow asynchronous communication up to their capacity
  • Common pitfalls like deadlocks and goroutine leaks can be avoided by following patterns like using directional channels, properly closing channels, and implementing timeouts with select statements

Introduction to Channels

Go’s concurrency model is built around goroutines and channels. While goroutines provide lightweight concurrent execution, channels solve the critical problem of safe communication between them. The traditional approach of using shared memory with locks is error-prone and leads to race conditions. Go encourages a different philosophy: instead of having multiple goroutines access shared memory with complex locking schemes, use channels to pass data between goroutines.

Consider a simple scenario where multiple goroutines need to send results back to a coordinator:

// Bad: Using shared memory
type Results struct {
    mu sync.Mutex
    data []int
}

func worker(id int, results *Results) {
    value := expensiveComputation(id)
    results.mu.Lock()
    results.data = append(results.data, value)
    results.mu.Unlock()
}

// Good: Using channels
func worker(id int, results chan<- int) {
    value := expensiveComputation(id)
    results <- value
}

The channel-based approach is cleaner, less error-prone, and more idiomatic. The channel handles synchronization automatically, eliminating the need for explicit locks.

Channel Basics and Syntax

Channels are typed conduits through which you send and receive values using the channel operator <-. Creating a channel uses the make function:

// Unbuffered channel
ch := make(chan int)

// Buffered channel with capacity of 5
buffered := make(chan string, 5)

The fundamental operations are sending and receiving:

ch <- 42        // Send 42 to channel ch
value := <-ch   // Receive from ch and assign to value
<-ch            // Receive from ch, discard value

Unbuffered channels block until both sender and receiver are ready. This provides a synchronization guarantee:

func main() {
    ch := make(chan int)
    
    go func() {
        fmt.Println("Goroutine: about to send")
        ch <- 42
        fmt.Println("Goroutine: sent value")
    }()
    
    time.Sleep(2 * time.Second)
    fmt.Println("Main: about to receive")
    value := <-ch
    fmt.Println("Main: received", value)
}

Output shows the goroutine blocks at the send operation until main is ready to receive.

Buffered channels behave differently—sends only block when the buffer is full:

func main() {
    ch := make(chan int, 2)
    
    ch <- 1
    ch <- 2
    // These two sends don't block
    // ch <- 3 would block here: the buffer is full and no receiver is ready
    
    fmt.Println(<-ch) // 1
    fmt.Println(<-ch) // 2
}

Use buffered channels when you know the expected throughput and want to decouple sender and receiver timing, but be cautious—oversized buffers can hide design problems.

Channel Patterns and Idioms

Channels enable powerful concurrency patterns. The worker pool pattern distributes work across multiple goroutines:

func workerPool(jobs <-chan int, results chan<- int, workers int) {
    var wg sync.WaitGroup
    
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for job := range jobs {
                results <- process(job)
            }
        }(i)
    }
    
    wg.Wait()
    close(results)
}

func main() {
    jobs := make(chan int, 100)
    results := make(chan int, 100)
    
    go workerPool(jobs, results, 5)
    
    // Send jobs
    for i := 0; i < 50; i++ {
        jobs <- i
    }
    close(jobs)
    
    // Collect results
    for result := range results {
        fmt.Println(result)
    }
}

The pipeline pattern chains multiple processing stages:

func generator(nums ...int) <-chan int {
    out := make(chan int)
    go func() {
        for _, n := range nums {
            out <- n
        }
        close(out)
    }()
    return out
}

func square(in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        for n := range in {
            out <- n * n
        }
        close(out)
    }()
    return out
}

func main() {
    // Pipeline: generator -> square
    for n := range square(generator(2, 3, 4)) {
        fmt.Println(n) // 4, 9, 16
    }
}

The select statement multiplexes multiple channel operations:

func worker(ctx context.Context, jobs <-chan int) {
    for {
        select {
        case job, ok := <-jobs:
            if !ok {
                return // jobs channel closed; without this check the
                       // loop would spin on zero values forever
            }
            process(job)
        case <-ctx.Done():
            fmt.Println("Worker cancelled")
            return
        case <-time.After(5 * time.Second):
            fmt.Println("No jobs for 5 seconds")
        }
    }
}

The select statement blocks until one of its cases can proceed, making it perfect for timeouts, cancellation, and handling multiple channels simultaneously.

Directional Channels and Best Practices

Go allows you to specify channel direction in function signatures, making APIs clearer and preventing misuse:

// Producer can only send
func producer(ch chan<- int) {
    for i := 0; i < 10; i++ {
        ch <- i
    }
    close(ch)
}

// Consumer can only receive
func consumer(ch <-chan int) {
    for value := range ch {
        fmt.Println(value)
    }
}

func main() {
    ch := make(chan int)
    go producer(ch)
    consumer(ch)
}

Attempting to receive from a send-only channel or send to a receive-only channel results in a compile-time error.

Closing channels signals that no more values will be sent. Only the sender should close a channel:

func producer(ch chan<- int) {
    defer close(ch)
    for i := 0; i < 5; i++ {
        ch <- i
    }
}

func consumer(ch <-chan int) {
    for value := range ch {
        fmt.Println(value)
    }
    // range automatically exits when channel is closed
}

You can detect a closed channel with the comma-ok idiom; ok is false once the channel is closed and any buffered values have been drained:

value, ok := <-ch
if !ok {
    fmt.Println("Channel closed")
    return
}

Never close a channel from the receiver side (a sender may then panic on its next send), and never close a channel twice—closing an already-closed channel panics.

Common Pitfalls and Debugging

Deadlocks occur when goroutines are waiting for each other indefinitely. The Go runtime detects some deadlocks:

// Deadlock: no receiver
func main() {
    ch := make(chan int)
    ch <- 42 // fatal error: all goroutines are asleep - deadlock!
}

// Fix: receive in a goroutine
func main() {
    ch := make(chan int)
    go func() {
        fmt.Println(<-ch)
    }()
    ch <- 42
}

Goroutine leaks happen when goroutines block forever on channel operations:

// Leak: the goroutine has no way to exit
func leak() {
    ch := make(chan int)
    go func() {
        for {
            value := <-ch // blocks forever once nothing more is sent
            process(value)
        }
    }()
    ch <- 1
    // The loop has no exit condition, so the goroutine leaks
    // (even closing ch wouldn't help: the loop would spin on zero values)
}

// Fix: use context for cancellation
func fixed(ctx context.Context) {
    ch := make(chan int)
    go func() {
        for {
            select {
            case value := <-ch:
                process(value)
            case <-ctx.Done():
                return
            }
        }
    }()
    ch <- 1
}

Nil channels block forever on send and receive operations, which can be useful in select statements:

func process(ch1, ch2 <-chan int) {
    for ch1 != nil || ch2 != nil {
        select {
        case v, ok := <-ch1:
            if !ok {
                ch1 = nil // disable this case: a nil channel never fires
                continue
            }
            fmt.Println("ch1:", v)
        case v, ok := <-ch2:
            if !ok {
                ch2 = nil
                continue
            }
            fmt.Println("ch2:", v)
        }
    }
}

Use the race detector during testing to catch subtle concurrency bugs:

go test -race ./...

Conclusion

Channels are the cornerstone of Go’s concurrency model, providing a safe and expressive way to communicate between goroutines. Use channels when you need to pass ownership of data, distribute work, or communicate results. They’re ideal for producer-consumer patterns, pipelines, and coordinating concurrent operations.

However, channels aren’t always the right choice. Use mutexes when protecting shared state that multiple goroutines need to access briefly, like incrementing a counter or updating a cache. Channels have overhead—don’t use them for simple synchronization where a sync.WaitGroup or sync.Mutex would suffice.

The key is understanding that channels are about communication and coordination, not just data transfer. When you find yourself fighting with complex locking schemes, step back and consider whether channels could make your design clearer. The Go proverb holds true: share memory by communicating, and your concurrent code will be more maintainable and correct.
