Rust Concurrency: Threads and Message Passing

Key Insights

  • Rust’s ownership system eliminates data races at compile time, making concurrent programming safer than in languages like C++ or Java, where data races surface as sporadic runtime bugs.
  • Message passing via channels (mpsc) is Rust’s preferred concurrency pattern, following the mantra “share memory by communicating” rather than “communicate by sharing memory.”
  • When shared state is necessary, Arc<Mutex<T>> provides thread-safe access, but channels often lead to cleaner, more maintainable code with better separation of concerns.

Introduction to Concurrency in Rust

Rust delivers on its promise of “fearless concurrency” by leveraging the same ownership and borrowing rules that prevent memory safety bugs. The compiler won’t let you write code with data races—period. If two threads could access the same memory simultaneously with at least one performing a write, the code won’t compile.

This is revolutionary. In most languages, data races are runtime bugs that appear sporadically under load, making them nightmarish to debug. Rust catches these issues before your code ever runs. The type system enforces that data is either immutable and shared across threads, or mutable and owned by a single thread.
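The immutable-and-shared half of that dichotomy can be sketched with Arc (covered in detail later in this article). Here, shared_readonly_sums is a hypothetical helper in which four threads read the same vector concurrently; because Arc only hands out shared references, no thread can mutate it, so no race is possible:

```rust
use std::sync::Arc;
use std::thread;

// Several threads read the same Arc'd vector; a call like data.push(0)
// inside a thread would not compile, because Arc provides shared
// (immutable) access only.
fn shared_readonly_sums(data: Vec<i32>) -> Vec<i32> {
    let data = Arc::new(data);
    let mut handles = Vec::new();
    for _ in 0..4 {
        let data = Arc::clone(&data);
        handles.push(thread::spawn(move || data.iter().sum::<i32>()));
    }
    // Every thread computed the same sum over the same shared data
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    println!("{:?}", shared_readonly_sums(vec![1, 2, 3]));
}
```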

This article focuses on Rust’s two primary concurrency primitives: threads with message passing via channels, and shared state using Mutex and Arc. You’ll learn when to use each approach and see practical patterns for building concurrent systems.

Creating and Managing Threads

Spawning threads in Rust is straightforward with std::thread::spawn. This function takes a closure and executes it on a new OS thread. To wait for a thread to complete, call join() on the returned JoinHandle.

use std::thread;
use std::time::Duration;

fn main() {
    let handle = thread::spawn(|| {
        for i in 1..10 {
            println!("spawned thread: {}", i);
            thread::sleep(Duration::from_millis(1));
        }
    });

    for i in 1..5 {
        println!("main thread: {}", i);
        thread::sleep(Duration::from_millis(1));
    }

    handle.join().unwrap();
}

The join() call blocks until the spawned thread completes. Without it, the main thread might exit before the spawned thread finishes, terminating the entire process.
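join() also surfaces failures: it returns a Result that is Err if the spawned thread panicked, which is why the examples here call unwrap() on it. A minimal sketch of observing a worker panic without crashing the parent (run_and_report is a hypothetical helper):

```rust
use std::thread;

// join() returns Err when the spawned thread panicked, letting the
// parent thread observe the failure instead of propagating the panic.
fn run_and_report(should_panic: bool) -> &'static str {
    let handle = thread::spawn(move || {
        if should_panic {
            panic!("worker failed");
        }
    });
    match handle.join() {
        Ok(()) => "ok",
        Err(_) => "worker panicked",
    }
}

fn main() {
    println!("{}", run_and_report(false));
    println!("{}", run_and_report(true));
}
```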

To use data from the enclosing scope, you must move ownership into the thread’s closure. Rust won’t let you borrow data that might outlive the thread:

use std::thread;

fn main() {
    let data = vec![1, 2, 3, 4, 5];
    
    let handle = thread::spawn(move || {
        println!("Vector from another thread: {:?}", data);
        data.iter().sum::<i32>()
    });

    let sum = handle.join().unwrap();
    println!("Sum: {}", sum);
    // data is no longer accessible here—ownership was moved
}

The move keyword forces the closure to take ownership of captured variables. This prevents dangling references and ensures thread safety at compile time.

Message Passing with Channels

Channels provide a way for threads to communicate by sending messages. Rust’s standard library includes mpsc (multiple producer, single consumer) channels in std::sync::mpsc.

The basic pattern creates a channel, moves the sender into a thread, and receives messages on the main thread:

use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        let messages = vec![
            "hello",
            "from",
            "the",
            "thread",
        ];

        for msg in messages {
            tx.send(msg).unwrap();
            thread::sleep(Duration::from_millis(100));
        }
    });

    for received in rx {
        println!("Received: {}", received);
    }
}

The receiver blocks waiting for messages. When the sender is dropped (when the thread ends), the channel closes and the loop terminates.

Multiple producers can send to a single receiver by cloning the sender:

use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    for id in 0..3 {
        let thread_tx = tx.clone();
        thread::spawn(move || {
            thread_tx.send(format!("Message from thread {}", id)).unwrap();
        });
    }
    
    drop(tx); // Drop original sender so channel closes when threads finish

    for received in rx {
        println!("{}", received);
    }
}

Dropping the original sender is crucial—otherwise, the receiver will wait forever since one sender still exists.

Shared State Concurrency

Sometimes message passing isn’t the right fit. When multiple threads need to read and modify the same data structure, you need shared state. Rust provides Mutex<T> for mutual exclusion and Arc<T> (an atomically reference-counted pointer) for sharing ownership across threads.

Arc is like Rc but thread-safe. Mutex ensures only one thread can access the data at a time:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        let handle = thread::spawn(move || {
            let mut num = counter.lock().unwrap();
            *num += 1;
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Result: {}", *counter.lock().unwrap());
}

Each thread gets a cloned Arc pointing to the same Mutex. The lock() method blocks until it can acquire the lock, returning a MutexGuard that dereferences to the inner value. The lock is automatically released when the guard goes out of scope.
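Because release is tied to the guard's scope, you can end the scope early to shorten the critical section. A small sketch (double_then_read is a hypothetical helper):

```rust
use std::sync::Mutex;

// The inner block bounds the critical section: the lock is held only
// while `guard` is alive, then released when the block ends.
fn double_then_read(m: &Mutex<i32>) -> i32 {
    {
        let mut guard = m.lock().unwrap(); // lock acquired here...
        *guard *= 2;
    } // ...and released here, when the guard is dropped
    // Other threads could acquire the lock at this point.
    *m.lock().unwrap()
}

fn main() {
    let m = Mutex::new(21);
    println!("{}", double_then_read(&m));
}
```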

Deadlocks are still possible with multiple mutexes. Always acquire locks in a consistent order:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let resource_a = Arc::new(Mutex::new(0));
    let resource_b = Arc::new(Mutex::new(0));

    let res_a = Arc::clone(&resource_a);
    let res_b = Arc::clone(&resource_b);
    
    let handle = thread::spawn(move || {
        // Always lock in A -> B order
        let mut a = res_a.lock().unwrap();
        let mut b = res_b.lock().unwrap();
        *a += 1;
        *b += 1;
    });

    // Same order in main thread
    let mut a = resource_a.lock().unwrap();
    let mut b = resource_b.lock().unwrap();
    *a += 1;
    *b += 1;

    handle.join().unwrap();
}

Choosing Between Message Passing and Shared State

The Go language popularized the phrase “share memory by communicating” rather than “communicate by sharing memory.” Rust embraces this philosophy. Channels should be your default choice because they:

  • Enforce clear ownership boundaries
  • Make data flow explicit
  • Reduce coupling between threads
  • Eliminate entire classes of bugs (no forgotten locks, no lock-ordering deadlocks)

Use shared state when:

  • Performance is critical and message copying is too expensive
  • You need random access to a shared data structure
  • The access pattern doesn’t fit a producer/consumer model

Here’s the counter from the previous section, refactored from shared state to channels:

use std::sync::mpsc;
use std::thread;

enum Message {
    Increment,
    GetValue(mpsc::Sender<i32>),
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // Counter thread owns the state
    thread::spawn(move || {
        let mut count = 0;
        for msg in rx {
            match msg {
                Message::Increment => count += 1,
                Message::GetValue(resp_tx) => {
                    resp_tx.send(count).unwrap();
                }
            }
        }
    });

    // Worker threads send increment messages
    let mut handles = vec![];
    for _ in 0..10 {
        let tx = tx.clone();
        let handle = thread::spawn(move || {
            tx.send(Message::Increment).unwrap();
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    // Get final value
    let (resp_tx, resp_rx) = mpsc::channel();
    tx.send(Message::GetValue(resp_tx)).unwrap();
    println!("Result: {}", resp_rx.recv().unwrap());
}

This actor-like pattern centralizes state ownership in one thread, eliminating the need for locks entirely.

Common Patterns and Best Practices

Thread pools avoid the overhead of spawning threads for every task. Here’s a simple worker pool pattern:

use std::sync::{mpsc, Arc, Mutex};
use std::thread;

type Job = Box<dyn FnOnce() + Send + 'static>;

fn main() {
    let (tx, rx) = mpsc::channel::<Job>();
    let rx = Arc::new(Mutex::new(rx));
    let mut workers = vec![];

    // Spawn worker threads
    for id in 0..4 {
        let rx = Arc::clone(&rx);
        let worker = thread::spawn(move || {
            loop {
                let job = rx.lock().unwrap().recv();
                match job {
                    Ok(job) => {
                        println!("Worker {} executing job", id);
                        job();
                    }
                    Err(_) => break, // Channel closed
                }
            }
        });
        workers.push(worker);
    }

    // Send jobs
    for i in 0..10 {
        tx.send(Box::new(move || {
            println!("Job {} running", i);
        })).unwrap();
    }

    drop(tx); // Close channel

    for worker in workers {
        worker.join().unwrap();
    }
}

Key practices:

  • Handle errors: recv() and send() return Result. In production, don’t blindly unwrap().
  • Avoid blocking operations in locks: Keep critical sections small. Do heavy computation outside the lock.
  • Consider crossbeam: For advanced use cases, the crossbeam crate provides better channels and scoped threads.
  • Profile before optimizing: Channels are fast. Don’t prematurely optimize to shared state.
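As a sketch of the first practice, collect_messages (a hypothetical helper) matches on the Result from recv_timeout instead of unwrapping, distinguishing a closed channel from a slow sender; the 500 ms cutoff is an arbitrary value for illustration:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Drain a channel, handling both error cases explicitly rather than
// calling unwrap() on every receive.
fn collect_messages(rx: mpsc::Receiver<String>) -> Vec<String> {
    let mut out = Vec::new();
    loop {
        match rx.recv_timeout(Duration::from_millis(500)) {
            Ok(msg) => out.push(msg),
            Err(mpsc::RecvTimeoutError::Disconnected) => break, // all senders dropped
            Err(mpsc::RecvTimeoutError::Timeout) => break,      // gave up waiting
        }
    }
    out
}

fn main() {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        tx.send(String::from("work done")).unwrap();
        // tx is dropped here, closing the channel
    });
    println!("got: {:?}", collect_messages(rx));
}
```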

Rust’s concurrency model shifts bugs from runtime to compile time. Embrace the compiler’s guidance—if it complains about thread safety, there’s a real issue. The result is concurrent code you can actually reason about and maintain.
