Rust Mutex and RwLock: Shared State Concurrency


Key Insights

  • Rust’s Mutex<T> and RwLock<T> provide memory-safe shared state concurrency through compile-time enforcement and runtime locking, eliminating data races that plague other languages
  • RwLock<T> can deliver 5-10x better performance than Mutex<T> in read-heavy workloads by allowing multiple concurrent readers, but adds overhead for write-dominated scenarios
  • Lock poisoning, deadlocks, and contention are still your responsibility—Rust prevents memory unsafety but not logical concurrency bugs, so minimize critical sections and establish clear lock ordering

Introduction to Shared State Concurrency in Rust

Shared state concurrency is inherently difficult. Multiple threads accessing the same memory simultaneously creates data races, corrupted state, and non-deterministic behavior. Most languages push these problems to runtime, leaving you to discover them through crashes or subtle bugs in production.

Rust takes a different approach. The ownership system makes sharing mutable state across threads a compile-time error by default. Try to share a mutable reference across threads, and the compiler stops you immediately.

use std::thread;

fn main() {
    let mut counter = 0;
    
    let handle = thread::spawn(|| {
        counter += 1; // ERROR: closure may outlive `main` but borrows `counter`
    });
    
    counter += 1; // and this would be a second mutable borrow
    handle.join().unwrap();
}

This won’t compile because Rust can’t guarantee that both threads won’t access counter simultaneously. You need explicit synchronization primitives: Mutex<T> for exclusive access and RwLock<T> for reader-writer patterns.

Mutex: Exclusive Access Pattern

A Mutex<T> (mutual exclusion lock) wraps your data and ensures only one thread can access it at a time. When you call lock(), you get a MutexGuard<T> that provides exclusive access. When the guard drops, the lock releases automatically—this RAII pattern prevents forgetting to unlock.

Here’s the counter example done correctly:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];
    
    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        let handle = thread::spawn(move || {
            let mut num = counter.lock().unwrap();
            *num += 1;
        });
        handles.push(handle);
    }
    
    for handle in handles {
        handle.join().unwrap();
    }
    
    println!("Result: {}", *counter.lock().unwrap());
}

Arc<T> (atomic reference counting) lets multiple threads own the same Mutex<T>. Each thread clones the Arc, incrementing the reference count, then moves its clone into the spawned thread. When all threads complete, the reference count drops to zero and the mutex is deallocated.

The lock() method blocks until it acquires the lock. If a thread panics while holding a lock, the mutex becomes “poisoned” to signal that the protected data might be in an inconsistent state:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let data = Arc::new(Mutex::new(0));
    let data_clone = Arc::clone(&data);
    
    let handle = thread::spawn(move || {
        let mut num = data_clone.lock().unwrap();
        *num += 1;
        panic!("Thread panicked while holding lock!");
    });
    
    let _ = handle.join();
    
    // Lock is poisoned now
    match data.lock() {
        Ok(guard) => println!("Value: {}", *guard),
        Err(poisoned) => {
            println!("Lock poisoned! Recovering data...");
            let guard = poisoned.into_inner();
            println!("Value: {}", *guard);
        }
    }
}

Use try_lock() when you can’t afford to block. It returns Err(TryLockError::WouldBlock) immediately if the lock is already held:

use std::sync::Mutex;

let mutex = Mutex::new(5);

if let Ok(guard) = mutex.try_lock() {
    println!("Got lock: {}", *guard);
} else {
    println!("Lock busy, doing something else");
}

RwLock: Optimizing for Read-Heavy Workloads

Mutex<T> grants exclusive access whether you’re reading or writing. For read-heavy workloads, this is wasteful—multiple readers can safely access data simultaneously. RwLock<T> (read-write lock) optimizes this pattern by allowing either multiple readers or one writer.

use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let cache = Arc::new(RwLock::new(vec![1, 2, 3, 4, 5]));
    let mut handles = vec![];
    
    // Spawn 5 readers
    for i in 0..5 {
        let cache = Arc::clone(&cache);
        handles.push(thread::spawn(move || {
            let data = cache.read().unwrap();
            println!("Reader {} sees: {:?}", i, *data);
        }));
    }
    
    // Spawn 1 writer
    let cache_writer = Arc::clone(&cache);
    handles.push(thread::spawn(move || {
        let mut data = cache_writer.write().unwrap();
        data.push(6);
        println!("Writer updated cache");
    }));
    
    for handle in handles {
        handle.join().unwrap();
    }
    
    println!("Final: {:?}", *cache.read().unwrap());
}

The read() method returns a RwLockReadGuard, while write() returns a RwLockWriteGuard. Multiple read guards can exist simultaneously, but a write guard requires exclusive access.

Performance gains depend on your read/write ratio. Here’s a realistic comparison:

use std::sync::{Arc, Mutex, RwLock};
use std::thread;
use std::time::Instant;

fn benchmark_mutex(iterations: usize) -> u128 {
    let data = Arc::new(Mutex::new(0));
    let start = Instant::now();
    
    let mut handles = vec![];
    for _ in 0..10 {
        let data = Arc::clone(&data);
        handles.push(thread::spawn(move || {
            for _ in 0..iterations {
                let _val = *data.lock().unwrap(); // Read
            }
        }));
    }
    
    for handle in handles {
        handle.join().unwrap();
    }
    
    start.elapsed().as_micros()
}

fn benchmark_rwlock(iterations: usize) -> u128 {
    let data = Arc::new(RwLock::new(0));
    let start = Instant::now();
    
    let mut handles = vec![];
    for _ in 0..10 {
        let data = Arc::clone(&data);
        handles.push(thread::spawn(move || {
            for _ in 0..iterations {
                let _val = *data.read().unwrap(); // Read
            }
        }));
    }
    
    for handle in handles {
        handle.join().unwrap();
    }
    
    start.elapsed().as_micros()
}

In read-heavy scenarios (90%+ reads), RwLock typically outperforms Mutex by 5-10x. But with frequent writes, the extra overhead of managing reader/writer state makes RwLock slower than Mutex. Profile your specific workload.

Common Patterns and Best Practices

Minimize Critical Sections: Hold locks for the shortest time possible. Extract values, release the lock, then process:

use std::sync::Mutex;

// Stand-in for real work (hypothetical helper)
fn expensive_computation(x: i32) -> i32 { x * 2 }

let data = Mutex::new(vec![1, 2, 3]);

// Bad: holding lock during expensive operation
{
    let mut guard = data.lock().unwrap();
    for item in guard.iter_mut() {
        *item = expensive_computation(*item);
    }
}

// Good: copy data, release lock, then process
let copied_data = {
    let guard = data.lock().unwrap();
    guard.clone()
};

let processed = copied_data.iter()
    .map(|x| expensive_computation(*x))
    .collect::<Vec<_>>();

*data.lock().unwrap() = processed;

Avoid Deadlocks with Lock Ordering: Establish a consistent order for acquiring multiple locks:

use std::sync::Mutex;

struct BankAccount {
    balance: Mutex<u64>,
}

impl BankAccount {
    fn transfer(from: &BankAccount, to: &BankAccount, amount: u64) {
        // Deadlock risk if two threads call with reversed arguments
        // Thread 1: transfer(A, B, 100)
        // Thread 2: transfer(B, A, 50)
        
        // Fix: always lock in consistent order based on address
        let (first, second) = if from as *const _ < to as *const _ {
            (from, to)
        } else {
            (to, from)
        };
        
        let mut first_balance = first.balance.lock().unwrap();
        let mut second_balance = second.balance.lock().unwrap();
        
        // Now perform transfer safely
        if std::ptr::eq(first, from) {
            *first_balance -= amount;
            *second_balance += amount;
        } else {
            *second_balance -= amount;
            *first_balance += amount;
        }
    }
}

Consider Alternatives: Channels often provide cleaner concurrency than shared state. Atomic types (AtomicUsize, AtomicBool) offer lock-free alternatives for simple values.

Real-World Application: Thread-Safe Connection Pool

Let’s build a practical connection pool using RwLock to manage a collection of database connections:

use std::sync::{Arc, RwLock};
use std::collections::VecDeque;
use std::time::Duration;

#[derive(Clone)]
struct Connection {
    id: usize,
}

struct ConnectionPool {
    available: Arc<RwLock<VecDeque<Connection>>>,
    max_size: usize,
}

impl ConnectionPool {
    fn new(size: usize) -> Self {
        let mut connections = VecDeque::new();
        for i in 0..size {
            connections.push_back(Connection { id: i });
        }
        
        ConnectionPool {
            available: Arc::new(RwLock::new(connections)),
            max_size: size,
        }
    }
    
    fn acquire(&self) -> Option<Connection> {
        // Try read lock first (optimistic check)
        {
            let available = self.available.read().unwrap();
            if available.is_empty() {
                return None;
            }
        } // Release read lock
        
        // Acquire write lock to actually remove connection
        let mut available = self.available.write().unwrap();
        available.pop_front()
    }
    
    fn release(&self, conn: Connection) {
        let mut available = self.available.write().unwrap();
        if available.len() < self.max_size {
            available.push_back(conn);
        }
    }
    
    fn available_count(&self) -> usize {
        self.available.read().unwrap().len()
    }
}

fn main() {
    let pool = Arc::new(ConnectionPool::new(5));
    let mut handles = vec![];
    
    for i in 0..10 {
        let pool = Arc::clone(&pool);
        handles.push(std::thread::spawn(move || {
            if let Some(conn) = pool.acquire() {
                println!("Thread {} acquired connection {}", i, conn.id);
                std::thread::sleep(Duration::from_millis(100));
                pool.release(conn);
                println!("Thread {} released connection", i);
            } else {
                println!("Thread {} couldn't acquire connection", i);
            }
        }));
    }
    
    for handle in handles {
        handle.join().unwrap();
    }
    
    println!("Final available: {}", pool.available_count());
}

This pool uses RwLock because checking availability is a read operation that many threads can perform concurrently. Only actual acquisition and release need write access. The two-phase check (read then write) minimizes write lock contention.

Rust’s Mutex and RwLock give you the tools for safe shared state concurrency, but you still need to design carefully. Choose Mutex by default for simplicity, reach for RwLock when profiling shows read contention, and always consider whether message passing would be cleaner than shared state.
