
Using files for shared memory IPC

In my application, there is one process which writes data to a file, and then, in response to receiving a request, will send (some of) that data via the network to the requesting process. The basis of this question is to see if we can speed up communication when both processes happen to be on the same host. (In my case, the processes are Java, but I think this discussion can apply more broadly.)

There are a few projects out there which use the MappedByteBuffers returned by Java's FileChannel.map() as a way to have shared memory IPC between JVMs on the same host (see Chronicle Queue, Aeron IPC, etc.).
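As a concrete illustration of that pattern, here is a minimal sketch (the file name and sizes are made up, each class would live in its own file, and it ignores the memory-ordering and fencing concerns that libraries like Chronicle Queue handle for real):

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    // MappedWriter.java -- writer JVM publishes a value through the mapping.
    public class MappedWriter {
        public static void main(String[] args) throws Exception {
            try (RandomAccessFile file = new RandomAccessFile("/tmp/shared.dat", "rw");
                 FileChannel channel = file.getChannel()) {
                // Both JVMs mapping this range are backed by the same page-cache pages.
                MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
                buf.putLong(8, 42L); // payload
                buf.putLong(0, 1L);  // flag word the reader polls on
            }
        }
    }

    // MappedReader.java -- reader JVM maps the same file and polls the region.
    public class MappedReader {
        public static void main(String[] args) throws Exception {
            try (RandomAccessFile file = new RandomAccessFile("/tmp/shared.dat", "rw");
                 FileChannel channel = file.getChannel()) {
                MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
                while (buf.getLong(0) == 0L) Thread.sleep(1); // crude poll
                System.out.println(buf.getLong(8));           // prints 42
            }
        }
    }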

One approach to speeding up same-host communication would be to have my application use one of those technologies to provide the request-response pathway for same-host communication, either in conjunction with the existing mechanism for writing to the data file, or by providing a unified means of both communication and writing to the file.

Another approach would be to allow the requesting process to have direct access to the data file.

I tend to favor the second approach - assuming it would be correct - as it would be easier to implement, and seems more efficient than copying/transmitting a copy of the data for each request (assuming we didn't replace the existing mechanism for writing to the file).

Essentially, I'd like to understand what exactly occurs when two processes have access to the same file and use it to communicate, specifically with Java (1.8) and Linux (3.10).

From my understanding, it seems like if two processes have the same file open at the same time, the "communication" between them will essentially be via "shared memory".

Note that this question is not concerned with the performance implications of using a MappedByteBuffer or not - it seems highly likely that using mapped buffers, with the reduction in copying and system calls, will reduce overhead compared to reading and writing the file, but that might require significant changes to the application.

Here is my understanding:

  1. When Linux loads a file from disk, it copies the contents of that file to pages in memory. That region of memory is called the page cache. As far as I can tell, it does this regardless of which Java method (FileInputStream.read(), RandomAccessFile.read(), FileChannel.read(), FileChannel.map()) or native method is used to read the file (observed with "free" and monitoring the "cache" value).
  2. If another process attempts to load the same file (while it is still resident in the cache) the kernel detects this and doesn't need to reload the file. If the page cache gets full, pages will get evicted - dirty ones being written back out to the disk. (Pages also get written back out if there is an explicit flush to disk, and periodically, with a kernel thread).
  3. Having a (large) file already in the cache is a significant performance boost, much more so than the differences based on which Java methods we use to open/read that file.
  4. If a file is loaded using the mmap system call (C) or via FileChannel.map() (Java), the file's pages (in the cache) are mapped directly into the process's address space. With other methods of opening a file, the file is loaded into pages not in the process's address space, and then the various methods to read/write that file copy bytes between those pages and a buffer in the process's address space (see the sketch after this list). There is an obvious performance benefit in avoiding that copy, but my question isn't concerned with performance.
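To make point 4 concrete, here is a sketch of both access paths in Java (the path is made up, and it assumes the file already exists and is at least 4 KiB so both the read and the read-only mapping succeed):

    import java.nio.ByteBuffer;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class ReadVsMap {
        public static void main(String[] args) throws Exception {
            try (FileChannel ch = FileChannel.open(Paths.get("/tmp/data.bin"),
                                                   StandardOpenOption.READ)) {
                // read(): the syscall copies bytes from page-cache pages into
                // this process-private buffer.
                ByteBuffer copy = ByteBuffer.allocate(4096);
                ch.read(copy, 0);

                // map(): the file's page-cache pages appear directly in this
                // process's address space; get() reads them without that copy.
                MappedByteBuffer view = ch.map(FileChannel.MapMode.READ_ONLY, 0, 4096);
                byte first = view.get(0);
            }
        }
    }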

So in summary, if I understand correctly - while mapping offers a performance advantage, it doesn't seem to offer any "shared memory" functionality that we don't already get just from the nature of Linux and the page cache.

So, please let me know where my understanding is off.

Thanks.

Asked May 22 '20 by dan.m was user2321368




1 Answer

Essentially, I'm trying to understand what happens when two processes have the same file open at the same time, and if one could use this to safely and performantly offer communication between two processes.

If you are accessing regular files using read and write operations (i.e. not memory mapping them), then the two processes do not share any memory.

  • User-space memory in the Java Buffer objects associated with the file is NOT shared across address spaces.
  • When a write syscall is made, data is copied from pages in one process's address space to pages in kernel space. (These could be pages in the page cache. That is OS specific.)
  • When a read syscall is made, data is copied from pages in kernel space to pages in the reading process's address space.

It has to be done that way. If the operating system shared the pages associated with the reader and writer processes' buffers behind their backs, then that would be a security / information leakage hole:

  • The reader would be able to see data in the writer's address space that had not yet been written via write(...), and maybe never would be.
  • The writer would be able to see data that the reader (hypothetically) wrote into its read buffer.
  • It would not be possible to address the problem by clever use of memory protection, because the granularity of memory protection is a page, versus the granularity of read(...) and write(...), which is as little as a single byte.

Sure: you can safely use reading and writing files to transfer data between two processes. But you would need to define a protocol that allows the reader to know how much data the writer has written. And for the reader to find out when the writer has written something, it may need to poll; e.g. checking whether the file has been modified.
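For example, here is a minimal sketch of such a protocol (the file name and record format are made up, and a single writer is assumed): the writer appends [int length][payload] records, and the reader polls the file length and consumes complete records:

    import java.io.RandomAccessFile;
    import java.nio.charset.StandardCharsets;

    public class FileIpcReader {
        public static void main(String[] args) throws Exception {
            long offset = 0;
            try (RandomAccessFile file = new RandomAccessFile("/tmp/ipc.log", "r")) {
                while (true) {
                    if (file.length() >= offset + 4) {           // length prefix present?
                        file.seek(offset);
                        int len = file.readInt();
                        if (file.length() >= offset + 4 + len) { // full record present?
                            byte[] payload = new byte[len];
                            file.readFully(payload);
                            System.out.println(new String(payload, StandardCharsets.UTF_8));
                            offset += 4 + len;
                            continue;
                        }
                    }
                    Thread.sleep(10); // poll; a real design might use inotify instead
                }
            }
        }
    }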

If you look at this in terms of just the data copying in the communication "channel":

  • With memory mapped files you copy (serialize) the data from application heap objects to the mapped buffer, and a second time (deserialize) from the mapped buffer to application heap objects.

  • With ordinary files there are two additional copies: 1) from the writing process's (non-mapped) buffer to kernel space pages (e.g. in the page cache), 2) from the kernel space pages to the reading process's (non-mapped) buffer.

The article below explains what is going on with conventional read / write and memory mapping. (It is in the context of copying a file and "zero-copy", but you can ignore that.)

Reference:

  • Zero Copy I: User-Mode Perspective
Answered Oct 17 '22 by Stephen C