In Rust, Rc (the reference-counting pointer) is what allows multiple variables to share ownership of a single value.
Since it is one of the smart pointers, it allocates the value on the heap and manages two additional counters alongside it: a strong reference count and a weak reference count.
When the strong count reaches zero, meaning the value is no longer referenced anywhere, the value is dropped. A weak reference is used when you don't want to interfere with the reference counting but may still need the value later, at the risk of it no longer existing. Even that is safe, because upgrading a Weak gives you an Option whose cases you have to handle in the end.
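To make that concrete, here is a minimal sketch of my own (not part of the benchmark below) showing how the strong count behaves and how upgrading a Weak forces you to handle the missing-value case:

use std::rc::{Rc, Weak};

fn main() {
    let strong = Rc::new(String::from("shared"));
    let another = Rc::clone(&strong);               // strong count is now 2
    let weak: Weak<String> = Rc::downgrade(&strong); // weak reference, does not keep the value alive

    assert_eq!(Rc::strong_count(&strong), 2);

    drop(another);
    drop(strong); // strong count hits zero here, so the String is dropped

    // upgrade() returns Option<Rc<String>>, so the "value no longer exists" case must be handled
    match weak.upgrade() {
        Some(value) => println!("still alive: {value}"),
        None => println!("already dropped"),
    }
}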
Arc (the atomic reference-counting pointer) is the thread-safe equivalent of Rc. Unlike Rc, which is neither Send nor Sync, an Arc can be safely passed to another thread or task. That is worth appreciating: a reference-counting pointer is, in the end, still a pointer, and with an ordinary pointer the referenced value could be freed while the pointer is still around, leaving it dangling. With Rc and Arc the counts guarantee the value lives as long as any strong reference does, and the compiler keeps Rc itself from crossing thread boundaries, where its non-atomic counts would not be safe. It's amazing that the language can guarantee all of this.
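As a quick illustration (again a sketch of my own, not part of the benchmark), sharing an Arc across threads looks like this; the same code would not compile with Rc:

use std::sync::Arc;
use std::thread;

fn main() {
    let data = Arc::new(vec![1, 2, 3]);

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let data = Arc::clone(&data); // bumps the atomic strong count
            thread::spawn(move || data.iter().sum::<i32>())
        })
        .collect();

    for handle in handles {
        println!("sum from a thread: {}", handle.join().unwrap());
    }
    // the Vec is freed only after the last Arc, including the one in main, is dropped
}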
But safety is not a free lunch; it definitely comes with a bit of cost. The following shows how big that cost can be.
black_box is a function provided by the core library that hints to the compiler to be MAXIMALLY pessimistic about what the value could be used for, which keeps the code around it from being optimized away. That makes it a great tool for benchmarking. Nonetheless, be mindful that it is still a "best-effort" hint, not a hard guarantee.
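The snippets below assume a small preamble along these lines (ITERATIONS is set to 100_000_000, matching the run described later):

use std::cell::RefCell;
use std::hint::black_box;
use std::rc::Rc;
use std::sync::Arc;
use std::time::Instant;

const ITERATIONS: usize = 100_000_000;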
Rc:
fn benchmark_rc() -> std::time::Duration {
    let start = Instant::now();
    let vc = Rc::new(RefCell::new(Vec::with_capacity(ITERATIONS)));
    for i in 0..ITERATIONS {
        let rc = black_box(i);
        // black_box the push and the value so the compiler can't optimize the loop away
        black_box(vc.borrow_mut().push(rc));
        black_box(rc);
    }
    start.elapsed()
}
Arc:
fn benchmark_arc() -> std::time::Duration {
    let start = Instant::now();
    // RefCell is not Sync, so this Arc could not actually be shared across threads;
    // that's fine here, since we only compare single-threaded reference-counting overhead
    let vc = Arc::new(RefCell::new(Vec::with_capacity(ITERATIONS)));
    for i in 0..ITERATIONS {
        let arc = black_box(i);
        black_box(vc.borrow_mut().push(arc));
        black_box(arc);
    }
    start.elapsed()
}
fn benchmark_prim() -> std::time::Duration {
    let start = Instant::now();
    // plain single-ownership Vec, no reference counting and no RefCell
    let mut vc = Vec::with_capacity(ITERATIONS);
    for i in 0..ITERATIONS {
        let val = black_box(i);
        black_box(vc.push(val));
        black_box(val);
    }
    start.elapsed()
}
Now, let's run the following.
fn main() {
    // Warm-up runs
    benchmark_arc();
    benchmark_rc();
    benchmark_prim();

    // Actual measurements
    let arc_times: Vec<_> = (0..10).map(|_| benchmark_arc()).collect();
    let avg_arc_time = arc_times.iter().sum::<std::time::Duration>() / arc_times.len() as u32;
    println!("Arc average time: {:?}", avg_arc_time);

    let rc_times: Vec<_> = (0..10).map(|_| benchmark_rc()).collect();
    let avg_rc_time = rc_times.iter().sum::<std::time::Duration>() / rc_times.len() as u32;
    println!("Rc average time: {:?}", avg_rc_time);

    let prim_times: Vec<_> = (0..10).map(|_| benchmark_prim()).collect();
    let avg_prim_time = prim_times.iter().sum::<std::time::Duration>() / prim_times.len() as u32;
    println!("Prim average time: {:?}", avg_prim_time);
}
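One practical note: timings like these are only meaningful for an optimized build, so to reproduce them, run the benchmark in release mode:

cargo run --release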
For this particular test, I ran each benchmark 10 times and took the average, with ITERATIONS set to 100_000_000. The results are as follows:
Arc average time: 182.991708ms
Rc average time: 144.623287ms
Prim average time: 108.992283ms
So, at least on my machine (Mac M3), the following conclusions can be drawn:
- Rc is around 26% faster than Arc (Arc 182.99 ms vs Rc 144.62 ms, a ratio of about 1.27)
- the single-ownership version is around 33% faster than its Rc counterpart (Rc 144.62 ms vs 108.99 ms, about 1.33)
- the single-ownership version is around 68% faster than the value wrapped inside Arc (Arc 182.99 ms vs 108.99 ms, about 1.68)
Note that this doesn't mean you should fixate on single ownership everywhere; sometimes shared ownership is simply unavoidable, isn't it?