 

Linux: Large int array: mmap vs seek file?

Suppose I have a dataset that is an array of 1e12 32-bit ints (4 TB) stored in a file on a 4 TB HDD with an ext4 filesystem.

Consider that the data is most likely random (or at least seems random).

// pseudo-code
for (long long i = 0; i < (1LL << 40); i++)
   SetFileIntAt(i) = GetRandInt();
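For concreteness, here is a minimal runnable sketch of the fill loop in real C; `fill_file` is a hypothetical name, and `rand()` stands in for the question's `GetRandInt()`. Buffered stdio is used so each underlying write(2) moves a large block rather than 4 bytes:

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Hypothetical fill routine: writes `count` pseudo-random 32-bit ints
 * to `path` through a buffered stream. rand() is a stand-in for the
 * question's GetRandInt(). Returns 0 on success, -1 on error. */
static int fill_file(const char *path, long long count)
{
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;
    for (long long i = 0; i < count; i++) {
        int32_t v = (int32_t)rand();
        if (fwrite(&v, sizeof v, 1, f) != 1) {
            fclose(f);
            return -1;
        }
    }
    return fclose(f);
}
```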

Further, consider that I wish to read individual int elements in an unpredictable order and that the algorithm runs indefinitely (it is ongoing).

// pseudo-code
while (true)
   UseInt(GetFileInt(GetRand(1LL << 40)));

We are on Linux x86_64 with gcc. You can assume the system has 4 GB of RAM (i.e. 1000x less than the dataset).

The following are two ways to architect access:

(A) mmap the file to a 4TB block of memory, and access it as an int array

(B) open(2) the file and use lseek(2) and read(2) to read the ints.
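A minimal sketch of option B, taking the question literally with lseek(2) plus read(2); `get_file_int` is a hypothetical helper name (note the syscall the question calls "seek(2)" is lseek(2)):

```c
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Option B sketch: lseek(2) to the element's byte offset, then read(2)
 * 4 bytes into *out. Returns 0 on success, -1 on error. Two system
 * calls per lookup. */
static int get_file_int(int fd, long long i, int32_t *out)
{
    if (lseek(fd, i * (off_t)sizeof *out, SEEK_SET) == (off_t)-1)
        return -1;
    return read(fd, out, sizeof *out) == (ssize_t)sizeof *out ? 0 : -1;
}
```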

Out of A and B, which will have the better performance, and why?

Is there another design that will give better performance than either A or B?

asked Jun 14 '12 by Andrew Tomazos

People also ask

When should I use mmap?

mmap is great if you have multiple processes accessing data in a read only fashion from the same file, which is common in the kind of server systems I write. mmap allows all those processes to share the same physical memory pages, saving a lot of memory.

Are memory mapped files faster?

Accessing memory mapped files is faster than using direct read and write operations for two reasons. Firstly, a system call is orders of magnitude slower than a simple change to a program's local memory.

Is mmap faster?

It's not a myth at all: mmap is faster; you save on straight copies of data and the system calls needed to do them.

Why is mmap slow?

Even though it is important and often used, mmap can be slow and inconsistent in its timing. mmap maps memory pages directly to bytes on disk. With mmap, whenever there is a page fault, the kernel pulls the page directly from disk into the program's memory space.


2 Answers

On the one hand, you have extensive use of the page cache, with page faults (major ones, since the data must come from disk) that are transparent to the application. On the other, you have numerous system calls, with their known overhead. The Wikipedia page about memory-mapped files seems quite clear to me; it reviews the pros and cons comprehensively.

I think a 64-bit architecture plus a large file calls for a memory-mapped file approach, at least to keep from complicating the application; I have been told that complexity often leads to poor performance. However, mmap() is usually used for sequential access, which is not the purpose here.

Because this is pure random access, there is little chance that two accesses will hit the same RAM-resident page. A full 4 KiB page will be pulled from the HDD into RAM just for 4 bytes of data, a 1024x read amplification. This is useless bus traffic and will probably result in poor performance.

Hope this helps.

answered Sep 20 '22 by Benoit


I'd say performance should be similar if access is truly random. The OS will use a similar caching strategy whether the data page is mapped from a file or the file data is simply held in the page cache without being mapped into the process.

Assuming the cache is ineffective:

  • You can use posix_fadvise(2) to declare your access pattern in advance and disable readahead.
  • Due to address space layout randomization, there might not be a contiguous block of 4 TB in your virtual address space.
  • If your data set ever expands, the address space issue might become more pressing.

So I'd go with explicit reads.

answered Sep 19 '22 by Simon Richter