
memory mapped files and pointers to volatile objects

My understanding of the semantics of volatile in C and C++ is that it turns memory accesses into (observable) side effects. Whenever reading or writing to a memory mapped file (or shared memory) I would expect the pointer to be volatile qualified, to indicate that this is in fact I/O. (John Regehr wrote a very good article on the semantics of volatile.)

Furthermore, I would expect using functions like memcpy() to access shared memory to be incorrect, since the signature suggests the volatile qualification is cast away, and the memory accesses would not be treated as I/O.

In my mind, this is an argument in favor of std::copy(), where the volatile qualifier won't be cast away and memory accesses would be correctly treated as I/O.

However, my experience of using pointers to volatile objects and std::copy() to access memory mapped files is that it's orders of magnitude slower than just using memcpy(). I am tempted to conclude that perhaps clang and GCC are overly conservative in their treatment of volatile. Is that the case?
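For concreteness, the two variants I'm comparing look roughly like this (a sketch; the mapped pointer, the length, and the destination buffer are placeholders):

```cpp
#include <algorithm>  // std::copy
#include <cstddef>    // std::size_t
#include <cstring>    // std::memcpy
#include <vector>

// 'mapped' stands for a pointer into a memory-mapped file obtained elsewhere
// (e.g. from mmap); 'len' is the number of mapped bytes. Both are placeholders.
void read_mapped(const void* mapped, std::size_t len, std::vector<char>& out)
{
    out.resize(len);

    // Variant 1: plain memcpy -- any volatile qualification is cast away,
    // and the compiler is free to copy in wide, vectorized chunks.
    std::memcpy(out.data(), mapped, len);

    // Variant 2: element-wise std::copy through a pointer to volatile, so
    // every load is a separate volatile access. This is the variant I
    // observe to be orders of magnitude slower.
    const volatile char* src = static_cast<const volatile char*>(mapped);
    std::copy(src, src + len, out.begin());
}
```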

What guidance is there for accessing shared memory with regard to volatile, if I want to follow the letter of the standard and have the standard back the semantics I rely on?


Relevant quote from the standard [intro.execution] §14:

Reading an object designated by a volatile glvalue, modifying an object, calling a library I/O function, or calling a function that does any of those operations are all side effects, which are changes in the state of the execution environment. Evaluation of an expression (or a subexpression) in general includes both value computations (including determining the identity of an object for glvalue evaluation and fetching a value previously assigned to an object for prvalue evaluation) and initiation of side effects. When a call to a library I/O function returns or an access through a volatile glvalue is evaluated the side effect is considered complete, even though some external actions implied by the call (such as the I/O itself) or by the volatile access may not have completed yet.

Asked Aug 18 '17 by Arvid


People also ask

Can a pointer be volatile?

Yes, a pointer can be volatile if the variable it points to can change unexpectedly, even though how this might happen is not evident from the code. An example is an object that can be modified by something external to the controlling thread, and whose accesses the compiler should therefore not optimize away.
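For illustration, the qualifier can sit on either side of the * (a minimal sketch):

```cpp
// Pointer to a volatile object: the pointed-to value may change "behind the
// compiler's back", so every read through p1 is a volatile access.
volatile int* p1 = nullptr;

// Volatile pointer to a non-volatile object: the pointer itself may change
// unexpectedly, but reads through it are ordinary accesses.
int* volatile p2 = nullptr;

// Both the pointer and the pointed-to object are volatile.
volatile int* volatile p3 = nullptr;
```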

What do you mean by mapping files into memory?

File mapping is the process of mapping the disk sectors of a file into the virtual memory space of a process. Once mapped, your app accesses the file as if it were entirely resident in memory.
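A minimal POSIX sketch of mapping a file read-only (the file name is a placeholder and error handling is abbreviated):

```cpp
#include <fcntl.h>     // open
#include <sys/mman.h>  // mmap, munmap
#include <sys/stat.h>  // fstat
#include <unistd.h>    // close
#include <cstdio>

int main()
{
    int fd = open("data.bin", O_RDONLY);  // placeholder file name
    if (fd < 0) return 1;

    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return 1; }

    // Map the whole file into the process's address space.
    void* addr = mmap(nullptr, static_cast<size_t>(st.st_size),
                      PROT_READ, MAP_PRIVATE, fd, 0);
    if (addr == MAP_FAILED) { close(fd); return 1; }

    // The file contents can now be read as if they were ordinary memory.
    const char* bytes = static_cast<const char*>(addr);
    std::printf("first byte: %d\n", st.st_size > 0 ? bytes[0] : 0);

    munmap(addr, static_cast<size_t>(st.st_size));
    close(fd);
}
```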

What is a volatile object?

From the C standard's point of view, an object defined with a volatile type has externally visible behavior. You can think of such objects as having little oscilloscope probes attached to them, so that the user can observe some properties of accesses to them, just as the user can observe data written to output files.


2 Answers

I think that you're overthinking this. I don't see any reason for mmap or equivalent (I'll use the POSIX terminology here) memory to be volatile.

From the point of view of the compiler, mmap returns an object that is modified and then given to msync or munmap, or to the implied unmap during _Exit. Those functions need to be treated as I/O, and nothing else does.

You could pretty much replace mmap with malloc+read and munmap with write+free and you would get most of the guarantees of when and how I/O is done.
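A rough sketch of that mental model, with illustrative names (map_like/unmap_like are made up here, and error handling is minimal):

```cpp
#include <cstddef>   // std::size_t
#include <cstdlib>   // malloc, free
#include <unistd.h>  // read, write, lseek

// "mmap" stand-in: allocate a buffer and read the file into it
// (reads from the current file offset).
void* map_like(int fd, std::size_t len)
{
    void* p = std::malloc(len);
    if (p && read(fd, p, len) != static_cast<ssize_t>(len)) {
        std::free(p);
        return nullptr;
    }
    return p;
}

// "munmap" stand-in: write the buffer back to the file and release it.
void unmap_like(int fd, void* p, std::size_t len)
{
    lseek(fd, 0, SEEK_SET);  // rewind before writing the data back
    write(fd, p, len);
    std::free(p);
}
```

From the compiler's point of view, the buffer only becomes externally visible at the read and write calls, which are the I/O points.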

Note that this doesn't even require the data to be fed back through munmap; it was just easier to demonstrate it this way. You could have mmap return a piece of memory and also record it internally in a list, and then have a function (let's call it msyncall) that takes no arguments and writes out all the memory that previous calls to mmap returned. We can then build on that, saying that any function that performs I/O has an implicit msyncall. We don't even need to go that far, though. From the point of view of the compiler, libc is a black box: some function returned some memory, and that memory has to be in sync before any other call into libc, because the compiler can't know which bits of memory previously returned from libc are still referenced and in active use inside it.
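A hypothetical sketch of that msyncall idea (the wrapper and the registry are invented purely for illustration):

```cpp
#include <sys/mman.h>
#include <sys/types.h>
#include <cstddef>
#include <vector>

namespace {
    struct Region { void* addr; std::size_t len; };
    std::vector<Region> g_regions;  // every region handed out by map_tracked
}

// Wrapper around mmap that remembers what it returned.
void* map_tracked(std::size_t len, int prot, int flags, int fd, off_t off)
{
    void* p = mmap(nullptr, len, prot, flags, fd, off);
    if (p != MAP_FAILED) g_regions.push_back({p, len});
    return p;
}

// The hypothetical "msyncall": flush every region ever returned, without
// taking any arguments -- the caller doesn't have to name which are dirty.
void msyncall()
{
    for (const Region& r : g_regions)
        msync(r.addr, r.len, MS_SYNC);
}
```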

That is how it works in practice, but how can we approach it from the point of view of the standard? Let's look at a similar problem first. For threads, shared memory is only synchronized at some very specific function calls. This is quite important because modern CPUs reorder reads and writes, memory barriers are expensive, and older CPUs could need explicit cache flushes before written data became visible to others (be it other threads, processes or I/O). The specification for mmap says:

The application must ensure correct synchronization when using mmap() in conjunction with any other file access method

but it doesn't specify how that synchronization is done. I know in practice that synchronization pretty much has to be msync because there are still systems out there where read/write are not using the same page cache as mmap.
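In practice that synchronization point looks roughly like this (a sketch, assuming addr and len describe a writable MAP_SHARED mapping obtained earlier):

```cpp
#include <sys/mman.h>
#include <cstring>

// 'addr' and 'len' describe a MAP_SHARED, PROT_READ|PROT_WRITE mapping of a
// file; both are assumed to come from an earlier mmap call.
void update_and_flush(void* addr, std::size_t len)
{
    // Modify the mapped memory like ordinary memory...
    std::memset(addr, 0, len);

    // ...then make the writes visible to other file access methods.
    // MS_SYNC blocks until the data has been written back.
    msync(addr, len, MS_SYNC);
}
```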

Answered Nov 15 '22 by Art


My understanding of the semantics of volatile in C and C++ is that it turns memory access into I/O

No, it does not do that. All volatile does is communicate from the programmer to the compiler that a certain memory area can be changed at any time, by "something else".

"Something else" might be a lot of different things. Examples:

  • Memory-mapped hardware register
  • Variable shared with an ISR
  • Variable updated from a callback function
  • Variable shared with another thread or process
  • Memory area updated through DMA

Since the standard (5.1.2.3) guarantees that an access (read/write) to a volatile object may not get optimized away, volatile can also be used to block certain compiler optimizations, which is mostly useful in hardware-related programming.
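A classic sketch of that use, with a made-up register address:

```cpp
#include <cstdint>

// A memory-mapped status register at a made-up address. Without volatile,
// the compiler could hoist the load out of the loop and spin forever on a
// cached value; with volatile, every iteration re-reads the hardware.
volatile std::uint32_t* const STATUS_REG =
    reinterpret_cast<volatile std::uint32_t*>(0x40000000u);

void wait_until_ready()
{
    while ((*STATUS_REG & 0x1u) == 0) {
        // busy-wait until the device sets the "ready" bit
    }
}
```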

Whenever reading or writing to a memory mapped file (or shared memory) I would expect the pointer to be volatile qualified

Not necessarily, no. The nature of the data doesn't matter, only how it is updated.

I would expect using functions like memcpy() to access shared memory to be incorrect

Overall it depends on your definition of "shared memory". This is a problem with your whole question, because you keep talking about "shared memory" which is not a formal, well-defined term. Memory shared with another ISR/thread/process?

Yes, memory shared with another ISR/thread/process might have to be declared as volatile, depending on the compiler. But this is only because volatile can prevent a compiler from making incorrect assumptions and optimizing code that accesses such "shared" variables the wrong way, something that was especially prone to happen with older embedded systems compilers. It shouldn't be necessary with modern hosted system compilers.

volatile does not lead to memory barrier behavior. It does not (necessarily) force expressions to get executed in a certain order.

volatile certainly does not guarantee any form of atomicity. This is why the _Atomic type qualifier was added to the C language.
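To make the distinction concrete, a small sketch in C++ terms (std::atomic being the C++ counterpart of C's _Atomic):

```cpp
#include <atomic>

// volatile: each access is a side effect, but a read-modify-write is still
// two separate accesses. Concurrent increments can lose updates.
volatile int v_counter = 0;

// std::atomic: the increment is a single atomic read-modify-write and it
// participates in the memory model (ordering between threads).
std::atomic<int> a_counter{0};

void bump()
{
    v_counter = v_counter + 1;  // volatile load + volatile store, not atomic
    a_counter.fetch_add(1);     // atomic, safe to call from several threads
}
```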

So back to the copy issue - if the memory area is "shared" between several ISRs/threads/processes, then volatile won't help at all. Instead you need some means of synchronization, such as a mutex, semaphore or critical section.

In my mind, this is an argument in favor of std::copy(), where the volatile qualifier won't be cast away and memory accesses would be correctly treated as I/O.

No, this is just wrong, for the already mentioned reasons.

What guidance is there for accessing shared memory with regard to volatile, if I want to follow the letter of the standard and have the standard back the semantics I rely on?

Use system-specific APIs to protect the memory access, through a mutex, semaphore, or critical section.
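For memory shared between threads of one process, a minimal sketch of that guidance (the types and names here are illustrative):

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

struct SharedState {
    std::mutex m;
    std::vector<int> data;  // no volatile needed: the mutex provides both
                            // mutual exclusion and the required ordering
};

void produce(SharedState& s, int value)
{
    std::lock_guard<std::mutex> lock(s.m);
    s.data.push_back(value);
}

std::size_t count(SharedState& s)
{
    std::lock_guard<std::mutex> lock(s.m);
    return s.data.size();
}
```

For memory shared between separate processes, the same idea applies with the operating system's process-shared primitives, for example a POSIX semaphore or a mutex created with the process-shared attribute.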

Answered Nov 15 '22 by Lundin