Go Buffered vs Unbuffered Channels
Key Insights
- Unbuffered channels provide synchronous communication with guaranteed delivery—the sender blocks until a receiver is ready, making them ideal for strict coordination between goroutines
- Buffered channels decouple senders from receivers up to their capacity limit, improving throughput in producer-consumer patterns but requiring careful sizing to avoid memory issues or false performance assumptions
- Choose unbuffered channels by default for explicit synchronization points; reach for buffered channels only when you need to handle bursts, implement worker pools, or have measured performance bottlenecks
Understanding Go Channels
Channels are Go’s built-in mechanism for safe communication between goroutines. Unlike shared memory with locks, channels provide a higher-level abstraction that follows the Go proverb: “Don’t communicate by sharing memory; share memory by communicating.”
Go offers two channel types: unbuffered and buffered. The difference isn’t just about capacity—it fundamentally changes how goroutines coordinate and when blocking occurs. Understanding this distinction is critical for writing correct, performant concurrent code.
Unbuffered Channels: Synchronous Handoff
An unbuffered channel has zero capacity. Every send operation blocks until another goroutine receives from that channel, and every receive blocks until another goroutine sends. This creates a synchronization point—a rendezvous between sender and receiver.
package main

import (
	"fmt"
	"time"
)

func main() {
	ch := make(chan string) // unbuffered channel

	go func() {
		fmt.Println("Goroutine: about to send")
		ch <- "message" // blocks here until main receives
		fmt.Println("Goroutine: send completed")
	}()

	time.Sleep(2 * time.Second) // simulate work
	fmt.Println("Main: about to receive")
	msg := <-ch
	fmt.Println("Main: received", msg)
	time.Sleep(100 * time.Millisecond) // let the goroutine print before main exits
}
Output:
Goroutine: about to send
Main: about to receive
Main: received message
Goroutine: send completed
The goroutine blocks on the send for 2 seconds until main is ready to receive. This blocking behavior guarantees that when the send completes, you know the receiver has the data.
Unbuffered channels will deadlock if there’s no receiver:
func main() {
	ch := make(chan int)
	ch <- 42 // fatal error: all goroutines are asleep - deadlock!
}
This synchronous nature makes unbuffered channels perfect for scenarios requiring explicit coordination, like signaling completion or ensuring ordered operations.
Buffered Channels: Decoupled Communication
A buffered channel has capacity for a specific number of elements. Sends only block when the buffer is full; receives only block when the buffer is empty.
package main

import "fmt"

func main() {
	ch := make(chan int, 3) // buffered channel with capacity 3

	// These sends don't block because buffer isn't full
	ch <- 1
	ch <- 2
	ch <- 3
	fmt.Println("Sent 3 values without blocking")

	// This would block since buffer is full
	// ch <- 4

	// Receiving makes space in the buffer
	fmt.Println(<-ch) // 1
	fmt.Println(<-ch) // 2

	// Now we can send again
	ch <- 4
	fmt.Println(<-ch) // 3
	fmt.Println(<-ch) // 4
}
The buffer decouples producers from consumers. A fast producer can fill the buffer and continue without waiting for a slow consumer, up to the capacity limit. This improves throughput in scenarios with bursty traffic or mismatched producer-consumer speeds.
Performance and Behavior Differences
The performance characteristics differ significantly:
package main

import (
	"fmt"
	"sync"
	"time"
)

func benchmarkChannel(buffered bool, bufSize int, messages int) time.Duration {
	var ch chan int
	if buffered {
		ch = make(chan int, bufSize)
	} else {
		ch = make(chan int)
	}

	start := time.Now()
	var wg sync.WaitGroup
	wg.Add(2)

	// Producer
	go func() {
		defer wg.Done()
		for i := 0; i < messages; i++ {
			ch <- i
		}
		close(ch)
	}()

	// Consumer
	go func() {
		defer wg.Done()
		for range ch {
			// Simulate minimal work
		}
	}()

	wg.Wait()
	return time.Since(start)
}

func main() {
	messages := 100000

	unbuffered := benchmarkChannel(false, 0, messages)
	buffered10 := benchmarkChannel(true, 10, messages)
	buffered100 := benchmarkChannel(true, 100, messages)
	buffered1000 := benchmarkChannel(true, 1000, messages)

	fmt.Printf("Unbuffered:      %v\n", unbuffered)
	fmt.Printf("Buffered (10):   %v\n", buffered10)
	fmt.Printf("Buffered (100):  %v\n", buffered100)
	fmt.Printf("Buffered (1000): %v\n", buffered1000)
}
Typical results show buffered channels can be 2-3x faster for high-throughput scenarios, but the gains plateau quickly. A buffer of 10-100 often provides most benefits; larger buffers increase memory usage without proportional performance gains.
Memory-wise, buffered channels allocate space for the buffer upfront. An int channel with capacity 1000 uses approximately 8KB. This matters when creating many channels or using large buffers.
Practical Use Cases and Patterns
Unbuffered channels excel at:
- Synchronization points: Ensuring goroutine A completes before B continues
- Request-response patterns: Guaranteeing the receiver got your message
- State transitions: Coordinating complex multi-goroutine workflows
Buffered channels shine in:
- Worker pools: Queuing tasks for processing
- Rate limiting: Controlling concurrent operations
- Burst handling: Smoothing traffic spikes
Here’s a worker pool using a buffered channel for the job queue:
package main

import (
	"fmt"
	"sync"
	"time"
)

type Job struct {
	ID int
}

func worker(id int, jobs <-chan Job, wg *sync.WaitGroup) {
	defer wg.Done()
	for job := range jobs {
		fmt.Printf("Worker %d processing job %d\n", id, job.ID)
		time.Sleep(100 * time.Millisecond) // simulate work
	}
}

func main() {
	const numWorkers = 3
	const numJobs = 10

	// Buffered channel allows queueing jobs without blocking
	jobs := make(chan Job, numJobs)

	var wg sync.WaitGroup

	// Start workers
	for i := 1; i <= numWorkers; i++ {
		wg.Add(1)
		go worker(i, jobs, &wg)
	}

	// Queue all jobs without blocking
	for i := 1; i <= numJobs; i++ {
		jobs <- Job{ID: i}
	}
	close(jobs)

	wg.Wait()
	fmt.Println("All jobs completed")
}
The buffered channel lets us queue all jobs immediately. Workers pull from the queue at their own pace. With an unbuffered channel, we’d block after sending the first job until a worker received it.
Pitfalls and Best Practices
Goroutine leaks are the most common channel mistake. If a goroutine sends to a channel that nobody reads from, it leaks:
// BAD: Goroutine leak
func processData(data []int) <-chan int {
	results := make(chan int)
	go func() {
		for _, v := range data {
			results <- v * 2 // If caller stops reading, this leaks
		}
	}()
	return results
}
Fix this with a done channel or context:
// GOOD: Goroutine can be cancelled (requires the "context" import)
func processData(ctx context.Context, data []int) <-chan int {
	results := make(chan int)
	go func() {
		defer close(results)
		for _, v := range data {
			select {
			case results <- v * 2:
			case <-ctx.Done():
				return
			}
		}
	}()
	return results
}
Buffer sizing mistakes create false performance improvements. A buffer of 1000 might seem fast during testing but causes memory issues in production. Size buffers based on actual burst characteristics, not arbitrary large numbers.
Guidelines for choosing:
- Start with unbuffered. They’re simpler and make synchronization explicit.
- Add buffering when you have a concrete reason: measured performance bottleneck, known burst pattern, or producer-consumer speed mismatch.
- Keep buffers small. Values like 1, 10, or 100 are usually sufficient. If you need more, reconsider your design.
- Never use buffered channels just to avoid thinking about synchronization. That’s hiding bugs, not fixing them.
Making the Right Choice
Unbuffered channels are the conservative, correct default. They force you to think about synchronization and make coordination explicit in your code. The blocking behavior prevents subtle race conditions and makes program flow easier to reason about.
Buffered channels are an optimization for specific scenarios. They trade simplicity for performance and require more careful reasoning about program state. Use them when profiling shows unbuffered channels are a bottleneck, or when your problem naturally has a bounded queue (like a worker pool with known capacity).
The capacity of a buffered channel is part of your API contract. It affects behavior, performance, and memory usage. Choose deliberately, document your reasoning, and resist the temptation to use large buffers as a band-aid for architectural problems.
Remember: channels are about communication and coordination, not just data transfer. Choose the channel type that best expresses your synchronization requirements, and your concurrent code will be more correct and maintainable.