
In Rust, is there a way to directly read the content of a file into the given uninitialized byte array?

Tags:

rust

I am looking for a way to directly read the content of a file into the provided uninitialized byte array.

Currently, I have code like the following:

use std::fs::File;
use std::io::Read;
use std::mem::MaybeUninit;

// A zero-initialized 4096-byte buffer; the zeroing is exactly the cost
// I would like to avoid.
let mut buf: MaybeUninit<[u8; 4096]> = MaybeUninit::zeroed();
let mut f = File::open("some_file")?;
// SAFETY: the buffer is zero-initialized, so treating it as [u8; 4096] is sound.
f.read(unsafe { buf.as_mut_ptr().as_mut().unwrap() })?;

The code works, except that it unnecessarily initializes the byte array with zeros. I would like to replace MaybeUninit::zeroed() with MaybeUninit::uninit(), but doing so would trigger undefined behavior according to the documentation of MaybeUninit. Is there a way to initialize an uninitialized memory region with the content of the file, without first reading the data somewhere else, using only the standard library? Or do we need to go for an OS-specific API?

asked Sep 16 '19 by Seiichi Uchida


1 Answer

The previous shot at the answer is kept below for posterity. Let's deal with the actual elephant in the room:

Is there a way to initialize an uninitialized memory region with the content of the file, without first reading the data somewhere else, using only the standard library? Or do we need to go for an OS-specific API?

There is: Read::read_to_end(&mut self, buf: &mut Vec<u8>) -> io::Result<usize>

This function will drain your impl Read object and, depending on the underlying implementation, perform one or more reads, extending the provided Vec and appending all bytes read to it as it goes.

It then returns the number of bytes read. Errors still need to be handled, although read_to_end retries internally when a read is interrupted (ErrorKind::Interrupted).
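
For example, a minimal sketch of reading a whole file this way, using the same "some_file" path as in the question:

use std::fs::File;
use std::io::Read;

fn main() -> std::io::Result<()> {
    let mut f = File::open("some_file")?;
    // Pre-sizing the Vec avoids reallocations; the standard library manages
    // initialization of the spare capacity internally, so no manual zeroing
    // and no unsafe code is needed here.
    let mut buf = Vec::with_capacity(4096);
    let n = f.read_to_end(&mut buf)?;
    println!("read {} bytes", n);
    Ok(())
}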


You are trying to micro-optimize something based on heuristics you believe to hold, when they do not.

The initialization of the array is done in one go, as low-level as it can get, with memset, all in one chunk. Both calloc and malloc+memset are highly optimized, and calloc relies on a trick or two to make it even more performant. Somebody on Code Review once pitted "highly optimized" code against a naive implementation and lost as a result.
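
As a point of reference, Rust's own zeroed allocation already takes that fast path; this little illustration assumes current standard library behavior (the specialization to alloc_zeroed is an implementation detail):

fn main() {
    // vec![0u8; n] is specialized by the standard library to request zeroed
    // memory from the allocator directly (alloc_zeroed, i.e. calloc on most
    // platforms) instead of allocating and then memset-ing.
    let buf = vec![0u8; 4096];
    assert_eq!(buf.len(), 4096);
}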

The takeaway is that second-guessing the compiler is typically fraught with issues and, overall, not worth micro-optimizing for unless you can put some real numbers on the problem.

The second takeaway is one of memory logic. As I am sure you are aware, allocating memory can be dramatically faster or slower depending on the position and size of the contiguous chunk being allocated, because memory is laid out in atomic units (pages). This is a much more impactful factor, to the point that, under the hood, the allocator will often align your memory request to an entire page to avoid having to fragment it, particularly as it gets into the L1/L2 caches.

If anything isn't clear, let me know and I'll generate some small benchmarks for you.

Finally, MaybeUninit is not at all the tool you want for this job in any case. The point of MaybeUninit isn't to skip a memset or two: because of the contract attached to assume_init, you have to guarantee that those values are properly initialized yourself, which means performing the equivalent writes anyway. There are cases where it is the right tool, but they are rare.
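
To illustrate that contract, here is a minimal sketch of a sound use of MaybeUninit: every byte has to be written before assume_init, so the initialization work does not simply disappear:

use std::mem::MaybeUninit;

fn main() {
    let mut buf: MaybeUninit<[u8; 16]> = MaybeUninit::uninit();
    // Write through a raw pointer so that we never create a reference to
    // uninitialized memory.
    let ptr = buf.as_mut_ptr() as *mut u8;
    for i in 0..16 {
        unsafe { ptr.add(i).write(i as u8) };
    }
    // SAFETY: every byte was written above, satisfying assume_init's contract.
    let buf = unsafe { buf.assume_init() };
    assert_eq!(buf[3], 3);
}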

In larger cases

There is an impact on performance of uninitialized vs. initialized memory, and we're going to show this by taking an absolutely perfect scenario: we're going to make ourselves a 64M buffer in memory and wrap it in a Cursor so we get a Read type. This Read type will have far, far lower latency than most I/O operations you will encounter in the wild, since it is almost guaranteed to reside entirely in L2 cache (due to its size) or L3 cache (because we're single-threaded) during the benchmark cycle. This should allow us to notice the performance loss from memsetting.

We're going to run three versions for each case (a sketch of the setup follows the list):

  • One where we define our buffer as [MaybeUninit::uninit().assume_init(); N], i.e. we're taking N chunks of MaybeUninit<u8>
  • One where our MaybeUninit is a contiguous N-element long chunk
  • One where we're just reading straight into an initialized buffer
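
The benchmark code itself was linked rather than inlined; the following is a hypothetical sketch of the three buffer strategies (names and sizes are illustrative, not the original code):

use std::io::{Cursor, Read};
use std::mem::MaybeUninit;

const N: usize = 4096; // the real benchmark used a 64M source buffer

fn main() {
    let mut cursor = Cursor::new(vec![1u8; N]);

    // Variant 1: N separate MaybeUninit<u8> elements. The outer assume_init
    // is sound because MaybeUninit<u8> itself requires no initialization.
    let mut small: [MaybeUninit<u8>; N] = unsafe { MaybeUninit::uninit().assume_init() };

    // Variant 2: one contiguous uninitialized N-byte chunk.
    let mut one: MaybeUninit<[u8; N]> = MaybeUninit::uninit();

    // Variant 3: the safe baseline, reading into zero-initialized memory.
    let mut safe = [0u8; N];
    let n = cursor.read(&mut safe).unwrap();
    println!("safe read: {} bytes", n);

    // Reading into variants 1 and 2 requires the unsafe pointer casts shown
    // in the question; omitted here for brevity.
    let _ = (&mut small, &mut one);
}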

The results (on a Core i9-9900HK laptop):

large reads/one uninit              time: [1.6720 us 1.7314 us 1.7848 us]
large reads/small uninit elements   time: [2.1539 us 2.1597 us 2.1656 us]
large reads/safe                    time: [2.0627 us 2.0697 us 2.0771 us]
small reads/one uninit              time: [4.5579 us 4.5722 us 4.5893 us]
small reads/small uninit elements   time: [5.1050 us 5.1219 us 5.1383 us]
small reads/safe                    time: [7.9654 us 7.9782 us 7.9889 us]

The results are as expected:

  • Allocating N separate MaybeUninit<u8> elements is slower than one huge chunk; this is completely expected and should not come as a surprise.
  • Small, iterative 4096-byte reads are slower than a huge, single, 128M read, even when the buffer only contains 64M
  • There is a performance loss of about 30% when reading into initialized memory
  • Opening anything else on the laptop while testing causes a 50%+ increase in benchmarked time

The last point is particularly important, and it becomes even more important when dealing with real I/O as opposed to a buffer in memory. The more layers of cache you have to traverse, the more side-effects you get from other processes impacting your own processing. If you are reading a file, you will typically encounter:

  • The filesystem cache (may or may not be swapped)
  • L3 cache (if on the same CPU)
  • L2 cache
  • L1 cache

Depending on the level of the cache that produces a cache miss, you're more or less likely to have your performance gain from using uninitialized memory dwarfed by the performance loss in having a cache miss.

So, the (unexpected) TL;DR:

  • Small, iterative reads are slower
  • There is a performance gain in using MaybeUninit, but it is typically an order of magnitude smaller than the cost of any I/O operation
answered Sep 29 '22 by Sébastien Renauld