When writing realtime (or near-realtime) code, heap allocations in the main execution loop are generally avoided. In my experience you allocate all the memory your program needs in an initialization step, and then pass that memory around as needed. A toy example in C might look something like the following:
#include <stdlib.h>

#define LEN 100

void not_realtime() {
    int *v = malloc(LEN * sizeof *v);
    for (int i = 0; i < LEN; i++) {
        v[i] = 1;
    }
    free(v);
}

void realtime(int *v, int len) {
    for (int i = 0; i < len; i++) {
        v[i] = 1;
    }
}

int main(int argc, char **argv) {
    not_realtime();

    int *v = malloc(LEN * sizeof *v);
    realtime(v, LEN);
    free(v);
}
And I believe roughly the equivalent in Rust:
fn possibly_realtime() {
    let mut v = vec![0; 100];
    for i in 0..v.len() {
        v[i] = 1;
    }
}

fn realtime(v: &mut Vec<i32>) {
    for i in 0..v.len() {
        v[i] = 1;
    }
}

fn main() {
    possibly_realtime();

    let mut v: Vec<i32> = vec![0; 100];
    realtime(&mut v);
}
What I'm wondering is: is Rust able to optimize possibly_realtime such that the local heap allocation of v only occurs once and is reused on subsequent calls to possibly_realtime? I'm guessing not, but maybe there's some magic that makes it possible.
Rust itself usually avoids allocating on the heap: the compiler will never insert an implicit heap allocation, but many library types will do it for you. Anything dynamically sized (e.g. Vec<T>) needs a heap allocation under the hood; for everything else, the type's documentation should hint at it.
Vec is a heap-allocated type with a great deal of scope for optimizing the number of allocations and/or minimizing the amount of wasted space.
The heap is the region of memory used for dynamic allocation. Unlike the stack, it is not managed automatically: an allocation lives until it is explicitly freed (in Rust, until its owner is dropped), which is exactly the cost that realtime code tries to keep out of its main loop.
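To make the distinction concrete, here's a minimal sketch (my own, not from the question) contrasting a fixed-size array, which needs no heap allocation, with a Vec, whose buffer lives on the heap:

fn stack_only() {
    // The size of [i32; 100] is known at compile time, so the whole array
    // lives on the stack and no allocator call is made.
    let mut a = [0i32; 100];
    a[0] = 1;
}

fn heap_allocating() {
    // vec! requests a buffer from the heap allocator; only the (pointer,
    // length, capacity) triple lives on the stack.
    let mut v = vec![0i32; 100];
    v[0] = 1;
}

fn main() {
    stack_only();
    heap_allocating();
}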
As of 2021, Rust is capable of optimizing out heap allocation and inlining vtable method calls (playground):
fn old_adder(a: f64) -> Box<dyn Fn(f64) -> f64> {
    Box::new(move |x| a + x)
}

#[inline(never)]
fn test() {
    let adder = old_adder(1.);
    assert_eq!(adder(1.), 2.);
}

fn main() {
    test();
}
To investigate this, it is useful to add #[inline(never)] to your function, then view the LLVM IR on the playground.
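For example, a sketch of the question's possibly_realtime with the attribute added (the kind of thing to paste into the playground):

// #[inline(never)] keeps possibly_realtime as a distinct function in the
// emitted LLVM IR, so its allocation call is easy to locate.
#[inline(never)]
fn possibly_realtime() {
    let mut v = vec![0; 100];
    for i in 0..v.len() {
        v[i] = 1;
    }
}

fn main() {
    possibly_realtime();
}

Outside the playground, the same IR can be produced locally with cargo rustc --release -- --emit=llvm-ir, which writes a .ll file under target/release/deps/.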
The Vec allocation in possibly_realtime, however, is not optimized away. Here's an excerpt of the LLVM IR from a recent (2021) compiler:
; playground::possibly_realtime
; Function Attrs: noinline nonlazybind uwtable
define internal fastcc void @_ZN10playground17possibly_realtime17h2ab726cd567363f3E() unnamed_addr #0 personality i32 (i32, i32, i64, %"unwind::libunwind::_Unwind_Exception"*, %"unwind::libunwind::_Unwind_Context"*)* @rust_eh_personality {
start:
%0 = tail call i8* @__rust_alloc_zeroed(i64 400, i64 4) #9, !noalias !8
%1 = icmp eq i8* %0, null
br i1 %1, label %bb20.i.i.i.i, label %vector.body
Every time that possibly_realtime is called, memory is allocated via __rust_alloc_zeroed.
Older versions of Rust, which used jemalloc as the default allocator, did not optimize it either. Here's an excerpt:
; Function Attrs: noinline uwtable
define internal fastcc void @_ZN17possibly_realtime20h1a3a159dd4b50685eaaE() unnamed_addr #0 {
entry-block:
%0 = tail call i8* @je_mallocx(i64 400, i32 0), !noalias !0
%1 = icmp eq i8* %0, null
br i1 %1, label %then-block-255-.i.i, label %normal-return2.i
Every time that possibly_realtime is called, memory is allocated via je_mallocx.
Reusing a buffer is a great way to leak sensitive information, and I'd encourage you to avoid it as much as possible. I'm sure you are already familiar with these problems, but I want to make sure that future searchers take note.
I also doubt that this "optimization" will ever be added to Rust, and certainly not without explicit opt-in from the programmer. The pointer to the allocated memory would need to be stored somewhere, and there really isn't anywhere for it to go: it would have to become a global or a thread-local variable. Rust can run in environments without threads (ruling out a thread-local), but a global variable would still preclude recursive calls to this function. All in all, I think passing the buffer into the function is much more explicit about what will happen.
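To make that concrete: if someone really wanted transparent reuse, the buffer would have to live in something like a thread-local, as in this purely illustrative sketch (my own code, not a planned Rust feature):

use std::cell::RefCell;

thread_local! {
    // The reused allocation has to live somewhere; here it's hidden in a
    // thread-local, with all the drawbacks mentioned above: hidden state,
    // no recursive calls while the buffer is borrowed, and old data
    // lingering in memory between calls.
    static SCRATCH: RefCell<Vec<i32>> = RefCell::new(Vec::new());
}

fn possibly_realtime() {
    SCRATCH.with(|buf| {
        let mut v = buf.borrow_mut();
        v.clear();
        v.resize(100, 0); // after the first call this reuses the old allocation
        for x in v.iter_mut() {
            *x = 1;
        }
    });
}

fn main() {
    possibly_realtime();
    possibly_realtime(); // second call allocates nothing
}

Passing &mut Vec<i32> (or &mut [i32]) into the function, as the question already does, expresses the same reuse without the hidden global state.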
I also assume that your example uses a Vec with a fixed size for demo purposes, but if you truly know the size at compile time, a fixed-size array could be a better choice.
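For instance, a sketch of the question's realtime function over a fixed-size array (assuming the length really is the compile-time constant LEN) avoids the heap entirely:

const LEN: usize = 100;

// A reference to a fixed-size array: the length is part of the type,
// and no heap allocation is involved anywhere.
fn realtime(v: &mut [i32; LEN]) {
    for x in v.iter_mut() {
        *x = 1;
    }
}

fn main() {
    let mut v = [0i32; LEN]; // lives on the stack
    realtime(&mut v);
}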