Rust Send and Sync: Compile-Time Thread Safety
Key Insights
- `Send` and `Sync` are marker traits that encode thread safety into Rust's type system, catching data races at compile time rather than runtime
- Most types automatically implement these traits through composition—if all fields are `Send`/`Sync`, the containing type is too
- Understanding when to use `Arc<Mutex<T>>` vs `Arc<RwLock<T>>` vs channels comes down to access patterns and contention characteristics
The Data Race Problem
Data races are insidious. They corrupt memory silently, cause heisenbugs that vanish under debuggers, and turn production systems into ticking time bombs. C++ gives you threads and hopes you know what you’re doing. Java wraps everything in synchronized blocks and prays you don’t forget one. Go’s race detector catches problems at runtime—if your tests happen to trigger them.
Rust takes a different approach: make data races a compile-time error.
The ownership system you already know—move semantics, borrowing, lifetimes—extends naturally to threading. The Send and Sync traits encode thread safety requirements into the type system itself. If your code compiles, you’ve proven to the compiler that no data races exist. No runtime checks, no performance overhead, no 3 AM pages about corrupted state.
This isn’t magic. It’s type theory applied to a real problem. Let’s see how it works.
Understanding the Send Trait
Send answers a simple question: can this value be safely moved to another thread?
Most types are Send. Integers, strings, vectors, structs containing only Send types—they all transfer ownership cleanly. The new thread becomes the sole owner, and Rust’s normal ownership rules prevent the original thread from accessing the moved value.
use std::thread;
fn send_works() {
let data = vec![1, 2, 3, 4, 5];
let handle = thread::spawn(move || {
// `data` has been moved here—we own it now
println!("Sum: {}", data.iter().sum::<i32>());
});
// This won't compile: data was moved
// println!("{:?}", data);
handle.join().unwrap();
}
The move keyword transfers ownership of data into the closure. The spawned thread now owns the vector exclusively. Clean, safe, efficient.
Now consider Rc<T>, Rust’s reference-counted pointer:
use std::rc::Rc;
use std::thread;
fn rc_fails() {
let data = Rc::new(vec![1, 2, 3]);
// This won't compile!
let handle = thread::spawn(move || {
println!("{:?}", data);
});
}
The compiler rejects this with a clear message:
error[E0277]: `Rc<Vec<i32>>` cannot be sent between threads safely
|
| let handle = thread::spawn(move || {
| ^^^^^^^^^^^^^ `Rc<Vec<i32>>` cannot be sent between threads safely
|
= help: the trait `Send` is not implemented for `Rc<Vec<i32>>`
Why? Rc uses non-atomic reference counting. If two threads increment or decrement the count simultaneously, you get memory corruption. Rust doesn’t add synchronization overhead you didn’t ask for—it simply refuses to compile unsafe code.
Understanding the Sync Trait
Send handles ownership transfer. Sync handles shared access: can multiple threads safely hold references to this value simultaneously?
The formal definition is elegant: T is Sync if and only if &T is Send. In other words, a type is Sync when sharing immutable references across threads is safe.
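You can check these properties at your desk with empty generic functions that act as compile-time assertions, a common idiom (not part of the standard library):

```rust
// Compile-time checks: these functions do nothing at runtime,
// but a call to them only compiles if the bound is satisfied.
fn assert_send<T: Send>() {}
fn assert_sync<T: Sync>() {}

fn main() {
    assert_send::<Vec<i32>>();              // Vec<i32> is Send
    assert_sync::<i32>();                   // i32 is Sync
    assert_sync::<std::sync::Mutex<i32>>(); // Mutex<i32> is Sync because i32 is Send
    // assert_sync::<std::cell::RefCell<i32>>(); // would fail: RefCell is !Sync
    // assert_send::<std::rc::Rc<i32>>();        // would fail: Rc is !Send
    println!("all checks compiled");
}
```

Because the checks happen entirely at compile time, they cost nothing in the final binary.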
Primitive types like i32 are Sync—multiple threads can read the same integer without coordination:
use std::thread;
fn sync_primitives() {
let value: i32 = 42;
thread::scope(|s| {
s.spawn(|| println!("Thread 1 sees: {}", value));
s.spawn(|| println!("Thread 2 sees: {}", value));
s.spawn(|| println!("Thread 3 sees: {}", value));
});
}
Scoped threads let us borrow value rather than move it. All three threads read the same memory location safely because i32 is Sync.
RefCell<T> breaks this guarantee:
use std::cell::RefCell;
use std::thread;
fn refcell_fails() {
let cell = RefCell::new(42);
// This won't compile!
thread::scope(|s| {
s.spawn(|| {
*cell.borrow_mut() += 1;
});
s.spawn(|| {
*cell.borrow_mut() += 1;
});
});
}
RefCell provides interior mutability through runtime borrow checking—but that checking isn’t thread-safe. Two threads could both call borrow_mut() simultaneously, bypassing the runtime checks and creating a data race. The compiler catches this:
error[E0277]: `RefCell<i32>` cannot be shared between threads safely
|
= help: the trait `Sync` is not implemented for `RefCell<i32>`
Auto Traits and Composition
Send and Sync are auto traits. The compiler derives them automatically based on a type’s fields. If every field is Send, the struct is Send. If every field is Sync, the struct is Sync.
This compositional approach catches problems early:
use std::sync::{Arc, Mutex};
// Automatically implements Send + Sync
struct ThreadSafeCache {
data: Arc<Mutex<Vec<String>>>,
max_size: usize,
}
// Can be shared across threads without explicit annotation
fn use_cache(cache: &ThreadSafeCache) {
let data = cache.data.clone();
std::thread::spawn(move || {
let mut guard = data.lock().unwrap();
guard.push("new entry".to_string());
});
}
ThreadSafeCache automatically implements both traits because Arc<Mutex<Vec<String>>> and usize both do.
Now add a raw pointer:
struct UnsafeCache {
data: Arc<Mutex<Vec<String>>>,
raw_ptr: *mut u8, // Raw pointers are neither Send nor Sync
}
fn try_send_unsafe(cache: UnsafeCache) {
// Won't compile: `UnsafeCache` is not `Send` because of `raw_ptr`
std::thread::spawn(move || {
// Capturing `cache` would move a non-Send value to another thread
drop(cache);
});
}
One non-Send field poisons the entire struct. The compiler forces you to think about thread safety when you introduce unsafe primitives.
Common Thread-Safe Primitives
Here’s your cheat sheet for concurrent Rust:
| Type | Send | Sync | Use Case |
|---|---|---|---|
| `Arc<T>` | if `T: Send + Sync` | if `T: Send + Sync` | Shared ownership across threads |
| `Mutex<T>` | if `T: Send` | if `T: Send` | Exclusive mutable access |
| `RwLock<T>` | if `T: Send` | if `T: Send + Sync` | Multiple readers, single writer |
| `mpsc::Sender<T>` | if `T: Send` | No (clone one per thread) | Channel producer |
| `AtomicU64` | Yes | Yes | Lock-free counters/flags |
The classic shared counter pattern:
use std::sync::{Arc, Mutex};
use std::thread;
fn shared_counter() {
let counter = Arc::new(Mutex::new(0));
let mut handles = vec![];
for _ in 0..10 {
let counter = Arc::clone(&counter);
let handle = thread::spawn(move || {
for _ in 0..1000 {
let mut num = counter.lock().unwrap();
*num += 1;
}
});
handles.push(handle);
}
for handle in handles {
handle.join().unwrap();
}
println!("Final count: {}", *counter.lock().unwrap());
// Always prints: Final count: 10000
}
Arc provides shared ownership (multiple threads can hold references). Mutex provides synchronized access (only one thread can modify at a time). Together, they give you safe shared mutable state.
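For a plain counter like this, the `AtomicU64` row of the table offers a lock-free alternative; a minimal sketch:

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};
use std::thread;

fn atomic_counter() -> u64 {
    // AtomicU64 is Sync, so threads can share it through Arc without a lock
    let counter = Arc::new(AtomicU64::new(0));
    let mut handles = vec![];
    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..1000 {
                // fetch_add is a single atomic read-modify-write, no guard needed
                counter.fetch_add(1, Ordering::Relaxed);
            }
        }));
    }
    for handle in handles {
        handle.join().unwrap();
    }
    counter.load(Ordering::Relaxed)
}

fn main() {
    println!("Final count: {}", atomic_counter()); // always 10000
}
```

There's no guard to hold and nothing to poison, but atomics only work for word-sized values; for anything larger, reach for `Mutex` or `RwLock`.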
For read-heavy workloads, prefer RwLock:
use std::sync::{Arc, RwLock};
let config = Arc::new(RwLock::new(load_config()));
{
// Many readers, no blocking
let cfg = config.read().unwrap();
process_request(&cfg);
} // drop the read guard before writing, or the writer may deadlock
{
// Exclusive writer access
let mut cfg = config.write().unwrap();
*cfg = reload_config();
}
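Channels (the `mpsc::Sender` row of the table above) sidestep shared state entirely: each value is moved to the receiving thread, which is exactly the safety the `T: Send` bound on `send` encodes. A small sketch:

```rust
use std::sync::mpsc;
use std::thread;

fn channel_sum() -> i32 {
    let (tx, rx) = mpsc::channel();
    for i in 0..4 {
        let tx = tx.clone(); // each worker gets its own Sender
        thread::spawn(move || {
            tx.send(i).unwrap(); // i32: Send, so this compiles
        });
    }
    drop(tx); // drop the original so the channel closes once workers finish
    rx.iter().sum() // iteration ends when every Sender is gone
}

fn main() {
    println!("sum = {}", channel_sum()); // 0 + 1 + 2 + 3 = 6
}
```

Dropping the original `tx` matters: the receiver's iterator only terminates when all senders are gone, so a forgotten clone hangs the loop forever.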
Unsafe Implementations and When They’re Needed
Sometimes you need to manually implement Send or Sync. This happens with FFI wrappers, custom synchronization primitives, or when you know something the compiler doesn’t.
Consider wrapping a C library’s handle:
// C library provides thread-safe handle
extern "C" {
fn create_handle() -> *mut Handle;
fn use_handle(h: *mut Handle);
fn destroy_handle(h: *mut Handle);
}
struct Handle {
_private: [u8; 0],
}
pub struct SafeHandle {
ptr: *mut Handle,
}
// SAFETY: The underlying C library documents that Handle
// is thread-safe and can be used from any thread
unsafe impl Send for SafeHandle {}
unsafe impl Sync for SafeHandle {}
impl SafeHandle {
pub fn new() -> Self {
Self {
ptr: unsafe { create_handle() },
}
}
pub fn use_it(&self) {
unsafe { use_handle(self.ptr) }
}
}
impl Drop for SafeHandle {
fn drop(&mut self) {
unsafe { destroy_handle(self.ptr) }
}
}
The unsafe impl is a promise to the compiler: “I’ve verified the invariants you can’t check.” You’re asserting that:
- For `Send`: transferring this value between threads won't cause data races
- For `Sync`: sharing references between threads won't cause data races
Get this wrong, and you’ve introduced undefined behavior. The compiler trusted you. Document your reasoning thoroughly.
You can also explicitly opt out of auto-derived implementations:
use std::marker::PhantomData;
struct NotSend {
data: i32,
_marker: PhantomData<*const ()>, // *const () is neither Send nor Sync
}
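To confirm the opt-out works, the same compile-time assertion trick applies; a sketch (the `require_send` helper is hypothetical, not part of std):

```rust
use std::marker::PhantomData;

#[allow(dead_code)]
struct NotSend {
    data: i32,
    _marker: PhantomData<*const ()>, // opts the struct out of Send and Sync
}

// Accepts only Send values; does nothing at runtime
fn require_send<T: Send>(value: T) -> T {
    value
}

fn main() {
    // Uncommenting the next line is a compile error: NotSend is !Send
    // require_send(NotSend { data: 0, _marker: PhantomData });
    let n = require_send(42_i32); // i32 is Send, so this is fine
    println!("{}", n);
}
```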
Fearless Concurrency in Practice
The Send and Sync traits transform threading bugs from runtime mysteries into compile-time errors. When your concurrent Rust code compiles, you’ve proven—mathematically, to the compiler—that no data races exist.
This shifts debugging effort dramatically. Instead of reproducing race conditions under specific timing, you fix type errors at your desk. Instead of deploying runtime race detectors that miss edge cases, you get exhaustive static analysis for free.
The cost is learning to think in terms of ownership and borrowing. The payoff is concurrent code you can reason about, refactor confidently, and deploy without fear.
That’s the Rust promise: if it compiles, it works. For threading, that promise is backed by Send and Sync.