Rust Rc vs Arc: Single vs Multi-Threaded Reference Counting

Key Insights

  • Rc<T> provides fast reference counting for single-threaded code, while Arc<T> uses atomic operations for thread-safe sharing at a ~2x performance cost
  • The compiler enforces the distinction through Send and Sync traits—Rc won’t compile in multi-threaded contexts, preventing subtle bugs at compile time
  • Reference cycles can leak memory with both types; use Weak<T> pointers to break cycles in data structures like trees and graphs

Introduction to Reference Counting in Rust

Rust’s ownership system enforces single ownership by default, which prevents data races and memory issues at compile time. But real-world programs often need shared ownership—multiple parts of your code legitimately need access to the same data. Consider a graph where nodes can have multiple parents, or cached data shared across different subsystems.

Here’s where you hit the wall with basic ownership:

struct Node {
    value: i32,
    children: Vec<Node>,
}

fn main() {
    let node = Node { value: 1, children: vec![] };
    let parent1 = Node { value: 2, children: vec![node] };
    let parent2 = Node { value: 3, children: vec![node] }; // Error: value moved
}

The compiler rejects this because node can’t have two owners. This is where reference counting enters: Rc<T> and Arc<T> enable shared ownership by tracking how many references exist and deallocating when the count hits zero.

Rc: Single-Threaded Reference Counting

Rc<T> (Reference Counted) provides shared ownership for single-threaded scenarios. It maintains a reference count using simple integer increments and decrements—no atomic operations, no thread synchronization overhead.

Here’s basic usage:

use std::rc::Rc;

fn main() {
    let data = Rc::new(vec![1, 2, 3, 4]);
    println!("Reference count: {}", Rc::strong_count(&data)); // 1
    
    let data2 = Rc::clone(&data);
    println!("Reference count: {}", Rc::strong_count(&data)); // 2
    
    {
        let data3 = Rc::clone(&data);
        println!("Reference count: {}", Rc::strong_count(&data)); // 3
    } // data3 dropped here
    
    println!("Reference count: {}", Rc::strong_count(&data)); // 2
}

Notice Rc::clone(&data) doesn’t deep-copy the data—it just increments the reference count and returns a new pointer. This is cheap: just a pointer copy and an integer increment.

Here’s a practical example with a graph structure:

use std::rc::Rc;

struct Node {
    value: i32,
    children: Vec<Rc<Node>>,
}

fn main() {
    let leaf = Rc::new(Node { value: 3, children: vec![] });
    
    let branch1 = Rc::new(Node {
        value: 1,
        children: vec![Rc::clone(&leaf)],
    });
    
    let branch2 = Rc::new(Node {
        value: 2,
        children: vec![Rc::clone(&leaf)],
    });
    
    println!("Leaf shared by {} parents", Rc::strong_count(&leaf) - 1); // 2
}

For interior mutability, combine Rc with RefCell:

use std::rc::Rc;
use std::cell::RefCell;

struct SharedCounter {
    value: RefCell<i32>,
}

fn main() {
    let counter = Rc::new(SharedCounter {
        value: RefCell::new(0),
    });
    
    let counter2 = Rc::clone(&counter);
    
    *counter.value.borrow_mut() += 1;
    *counter2.value.borrow_mut() += 1;
    
    println!("Counter: {}", counter.value.borrow()); // 2
}

Arc: Atomic Reference Counting for Concurrency

Arc<T> (Atomically Reference Counted) provides the same shared ownership semantics but with thread-safety guarantees. It uses atomic operations for the reference count, making it safe to share across threads.

Basic multi-threaded usage:

use std::sync::Arc;
use std::thread;

fn main() {
    let data = Arc::new(vec![1, 2, 3, 4]);
    let mut handles = vec![];
    
    for i in 0..3 {
        let data_clone = Arc::clone(&data);
        let handle = thread::spawn(move || {
            println!("Thread {}: {:?}", i, data_clone);
        });
        handles.push(handle);
    }
    
    for handle in handles {
        handle.join().unwrap();
    }
}

For shared mutable state, combine Arc with Mutex:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];
    
    for _ in 0..10 {
        let counter_clone = Arc::clone(&counter);
        let handle = thread::spawn(move || {
            let mut num = counter_clone.lock().unwrap();
            *num += 1;
        });
        handles.push(handle);
    }
    
    for handle in handles {
        handle.join().unwrap();
    }
    
    println!("Result: {}", *counter.lock().unwrap()); // 10
}

The atomic operations in Arc aren’t free. Here’s a simple comparison showing the overhead:

use std::rc::Rc;
use std::sync::Arc;
use std::time::Instant;

fn benchmark_rc() {
    let data = Rc::new(vec![1, 2, 3]);
    let start = Instant::now();
    
    for _ in 0..1_000_000 {
        let _clone = Rc::clone(&data);
    }
    
    println!("Rc: {:?}", start.elapsed());
}

fn benchmark_arc() {
    let data = Arc::new(vec![1, 2, 3]);
    let start = Instant::now();
    
    for _ in 0..1_000_000 {
        let _clone = Arc::clone(&data);
    }
    
    println!("Arc: {:?}", start.elapsed());
}

fn main() {
    // Compile with --release for representative numbers.
    benchmark_rc();
    benchmark_arc();
}

On typical hardware, Arc operations take roughly 2x longer than Rc operations due to atomic synchronization.

Key Differences and When to Use Each

The fundamental difference is thread-safety. Rc is not Send or Sync, meaning the compiler prevents you from sending it across thread boundaries:

use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn fails_to_compile() {
    let data = Rc::new(vec![1, 2, 3]);
    
    // This won't compile: Rc is not Send
    // thread::spawn(move || {
    //     println!("{:?}", data);
    // });
}

fn compiles_fine() {
    let data = Arc::new(vec![1, 2, 3]);
    
    thread::spawn(move || {
        println!("{:?}", data);
    });
}

Decision criteria:

  • Use Rc when: You need shared ownership in single-threaded code. Examples: UI component trees, single-threaded graphs, caching within a single thread
  • Use Arc when: You need to share data across threads. Examples: thread pools with shared configuration, concurrent data structures, parallel processing pipelines

Never use Arc just because you “might need threading later.” The performance overhead is real, and premature optimization for threading adds complexity. Start with Rc and refactor to Arc when you actually introduce concurrency.

Common Patterns and Pitfalls

The biggest pitfall with both Rc and Arc is reference cycles, which cause memory leaks:

use std::rc::Rc;
use std::cell::RefCell;

struct Node {
    value: i32,
    parent: RefCell<Option<Rc<Node>>>,
    children: RefCell<Vec<Rc<Node>>>,
}

fn create_cycle() {
    let parent = Rc::new(Node {
        value: 1,
        parent: RefCell::new(None),
        children: RefCell::new(vec![]),
    });
    
    let child = Rc::new(Node {
        value: 2,
        parent: RefCell::new(Some(Rc::clone(&parent))),
        children: RefCell::new(vec![]),
    });
    
    parent.children.borrow_mut().push(Rc::clone(&child));
    
    // Memory leak: parent and child reference each other
    // Neither will ever be deallocated
}

Break cycles with Weak<T> pointers, which don’t increment the strong reference count:

use std::rc::{Rc, Weak};
use std::cell::RefCell;

struct Node {
    value: i32,
    parent: RefCell<Option<Weak<Node>>>, // Weak prevents cycle
    children: RefCell<Vec<Rc<Node>>>,
}

fn no_cycle() {
    let parent = Rc::new(Node {
        value: 1,
        parent: RefCell::new(None),
        children: RefCell::new(vec![]),
    });
    
    let child = Rc::new(Node {
        value: 2,
        parent: RefCell::new(Some(Rc::downgrade(&parent))),
        children: RefCell::new(vec![]),
    });
    
    parent.children.borrow_mut().push(Rc::clone(&child));
    
    // No leak: child holds weak reference to parent
}

Performance Considerations and Best Practices

The performance difference between Rc and Arc matters in hot paths. In a tight loop cloning references millions of times, Rc will significantly outperform Arc. But in typical application code where you’re cloning references occasionally, the difference is negligible.

Best practices:

  1. Default to owned types. Only use Rc/Arc when you genuinely need shared ownership
  2. Prefer Rc in single-threaded code. Don’t pay for atomics you don’t need
  3. Watch for cycles. Use Weak for parent pointers, cache back-references, and observer patterns
  4. Combine with interior mutability carefully. Rc<RefCell<T>> and Arc<Mutex<T>> are common, but understand the runtime cost of the interior mutability pattern
  5. Profile before optimizing. The difference between Rc and Arc rarely matters compared to algorithmic improvements

Reference counting is a tool for specific ownership patterns. When Rust’s compiler forces you to think about ownership, that’s a feature, not a bug. Use Rc and Arc deliberately, understanding the tradeoffs you’re making.
