Goroutines and Channels: Go Concurrency Model
Key Insights
- Goroutines are not threads—they’re multiplexed onto OS threads by Go’s runtime scheduler, giving you lightweight concurrency with ~2KB initial stack size versus ~1MB for typical OS threads.
- Channels enforce structured communication between goroutines, making concurrent code easier to reason about than shared memory with locks.
- The select statement is your multiplexer for channel operations, enabling timeouts, cancellation, and elegant handling of multiple concurrent data streams.
Why Go’s Concurrency Model Matters
Most programming languages treat concurrency as an afterthought—bolted-on threading libraries with mutexes and condition variables that developers must carefully orchestrate. Go took a different approach by building concurrency into the language itself, drawing from Tony Hoare’s 1978 paper on Communicating Sequential Processes (CSP).
The core philosophy is simple: don’t communicate by sharing memory; share memory by communicating. Instead of multiple threads fighting over shared data protected by locks, goroutines pass data through channels. This shift in mental model eliminates entire categories of bugs.
Before diving in, let’s clarify terminology. Concurrency is about dealing with multiple things at once—structuring your program to handle many tasks. Parallelism is about doing multiple things at once—actual simultaneous execution. Go gives you concurrency primitives; whether they run in parallel depends on your hardware and runtime configuration.
Goroutines: Lightweight Concurrent Execution
A goroutine is a function executing concurrently with other goroutines in the same address space. You launch one by prefixing a function call with the go keyword:
package main

import (
    "fmt"
    "time"
)

func sayHello(name string) {
    fmt.Printf("Hello, %s!\n", name)
}

func main() {
    go sayHello("Alice")
    go sayHello("Bob")
    // Without this, main exits before goroutines complete
    time.Sleep(100 * time.Millisecond)
}
That time.Sleep is a code smell we’ll fix later, but it illustrates an important point: the main goroutine doesn’t wait for others to finish.
Goroutines are remarkably cheap. Each starts with roughly 2KB of stack space that grows and shrinks as needed. Compare this to OS threads, which typically allocate 1-8MB of fixed stack space. You can spawn hundreds of thousands of goroutines on a modest machine.
The Go runtime uses M:N scheduling—it multiplexes M goroutines onto N OS threads. The scheduler handles this transparently, parking goroutines that are blocked on I/O or channel operations and running others. You don’t manage thread pools or worry about thread exhaustion.
func main() {
    for i := 0; i < 10000; i++ {
        go func(id int) {
            // Each goroutine does some work
            result := id * id
            _ = result
        }(i)
    }
    time.Sleep(time.Second)
    fmt.Println("Spawned 10,000 goroutines")
}
Notice the function parameter id int and the matching argument i. In Go versions before 1.22 this was critical: without it, every goroutine captured the single shared loop variable and most would see its final value. Since Go 1.22 each iteration gets a fresh variable, but passing the value explicitly remains the clearest, version-safe style.
Channels: Safe Communication Between Goroutines
Channels are typed conduits through which you send and receive values. They provide synchronization without explicit locks.
func main() {
    // Create an unbuffered channel of integers
    ch := make(chan int)

    go func() {
        ch <- 42 // Send blocks until someone receives
    }()

    value := <-ch // Receive blocks until someone sends
    fmt.Println(value)
}
Unbuffered channels synchronize sender and receiver—the send operation blocks until another goroutine receives, and vice versa. This creates a rendezvous point.
Buffered channels have capacity:
func main() {
    ch := make(chan string, 3) // Buffer size of 3

    ch <- "first"  // Doesn't block
    ch <- "second" // Doesn't block
    ch <- "third"  // Doesn't block
    // ch <- "fourth" would block until space is available

    fmt.Println(<-ch) // "first"
    fmt.Println(<-ch) // "second"
    fmt.Println(<-ch) // "third"
}
Buffered channels decouple send and receive timing. Sends only block when the buffer is full; receives only block when it’s empty. Use them when you know the expected throughput or want to handle bursts.
Always close channels from the sender side when no more values will be sent:
func producer(ch chan<- int) {
    for i := 0; i < 5; i++ {
        ch <- i
    }
    close(ch)
}

func main() {
    ch := make(chan int)
    go producer(ch)
    for value := range ch {
        fmt.Println(value)
    }
    // Loop exits when channel is closed
}
The chan<- syntax indicates a send-only channel, and <-chan indicates receive-only. Use these in function signatures to clarify intent and catch mistakes at compile time.
Common Concurrency Patterns
The worker pool pattern distributes tasks across a fixed number of goroutines:
func worker(id int, jobs <-chan int, results chan<- int) {
    for job := range jobs {
        // Simulate work
        time.Sleep(100 * time.Millisecond)
        results <- job * 2
    }
}

func main() {
    const numJobs = 20
    const numWorkers = 5

    jobs := make(chan int, numJobs)
    results := make(chan int, numJobs)

    // Start workers
    for w := 1; w <= numWorkers; w++ {
        go worker(w, jobs, results)
    }

    // Send jobs
    for j := 1; j <= numJobs; j++ {
        jobs <- j
    }
    close(jobs)

    // Collect results
    for r := 1; r <= numJobs; r++ {
        fmt.Println(<-results)
    }
}
This pattern bounds concurrency—you control exactly how many goroutines process work simultaneously. It’s essential for rate limiting, managing connections, or controlling resource usage.
The fan-out/fan-in pattern distributes work then collects results:
func fanOut(input <-chan int, workers int) []<-chan int {
    channels := make([]<-chan int, workers)
    for i := 0; i < workers; i++ {
        channels[i] = process(input)
    }
    return channels
}

func fanIn(channels ...<-chan int) <-chan int {
    out := make(chan int)
    var wg sync.WaitGroup
    for _, ch := range channels {
        wg.Add(1)
        go func(c <-chan int) {
            defer wg.Done()
            for v := range c {
                out <- v
            }
        }(ch)
    }
    go func() {
        wg.Wait()
        close(out)
    }()
    return out
}
Select Statement: Multiplexing Channel Operations
The select statement lets a goroutine wait on multiple channel operations:
func main() {
    ch1 := make(chan string)
    ch2 := make(chan string)

    go func() {
        time.Sleep(100 * time.Millisecond)
        ch1 <- "from channel 1"
    }()
    go func() {
        time.Sleep(200 * time.Millisecond)
        ch2 <- "from channel 2"
    }()

    for i := 0; i < 2; i++ {
        select {
        case msg1 := <-ch1:
            fmt.Println(msg1)
        case msg2 := <-ch2:
            fmt.Println(msg2)
        }
    }
}
Timeouts are straightforward with select:
func fetchWithTimeout(url string) (string, error) {
    result := make(chan string, 1)
    go func() {
        // Simulate HTTP request
        time.Sleep(2 * time.Second)
        result <- "response data"
    }()

    select {
    case data := <-result:
        return data, nil
    case <-time.After(1 * time.Second):
        return "", fmt.Errorf("request timed out")
    }
}
For non-blocking operations, use default:
select {
case msg := <-ch:
    fmt.Println("received:", msg)
default:
    fmt.Println("no message available")
}
Avoiding Common Pitfalls
Goroutine leaks happen when goroutines block forever. Always ensure goroutines have an exit path:
func leakyFunction() {
    ch := make(chan int)
    go func() {
        val := <-ch // Blocks forever if nothing sends
        fmt.Println(val)
    }()
    // Function returns, goroutine leaks
}
Use sync.WaitGroup to wait for goroutines properly:
func main() {
    var wg sync.WaitGroup
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            fmt.Printf("Worker %d done\n", id)
        }(i)
    }
    wg.Wait()
    fmt.Println("All workers completed")
}
Detect race conditions with the -race flag:
go run -race main.go
go test -race ./...
This instruments your code to detect concurrent access to shared memory. Run it in CI—races are bugs waiting to corrupt your data.
When to Use What
Use channels when:
- You’re passing ownership of data between goroutines
- You need to coordinate multiple goroutines
- You want to distribute work or collect results
Use sync.Mutex when:
- You’re protecting a shared data structure (like a cache)
- You need simple read/write locking (sync.RWMutex)
- Channel overhead matters (rare, but measure first)
Start simple. A single goroutine with channels often beats a complex mutex-protected structure. Profile before optimizing—Go’s concurrency primitives are fast enough for most workloads.
The Go proverb says it best: “Don’t communicate by sharing memory; share memory by communicating.” When you internalize this, concurrent Go code becomes surprisingly straightforward to write and maintain.