 

Guaranteeing the order of execution without using volatile or memory barrier and locks

I have a question regarding the compiler changing the order of execution. I am trying to improve the performance of a multi-threaded program (C language) by replacing a critical section with a signaling mechanism (through a semaphore).

I need to guarantee the order of execution here, and have been doing some research on this. I saw many questions about the order of execution within a single function, but not much discussion of ordering across a function call (a function called within another function).

Based on rule #4 at https://en.wikipedia.org/wiki/Sequence_point, does the code chunk below guarantee that p->a is written before func2 is entered, since func2 takes p as an argument (assuming the compiler adheres to the sequence point rules defined there)?

struct request { int a; int b; };   /* illustrative type; the original used an unnamed struct pointer */

void func1 (struct request *p) {
  p->a = x;                         /* x, y, s as in the rest of the program */
  func2 (p);
}

void func2 (struct request *p) {
  p->b = y;
  releaseSemaphore (s);
}

It is critical that p->b is set only after p->a is set, because another thread is in a loop processing various requests and identifies a valid request by whether p->b is set. Releasing the semaphore only triggers that task if it is idle (waiting on the semaphore); if it is busy processing other requests, it will check p->b later, and we cannot guarantee that func1 is called only while that thread is idle.

Dan Z, asked Mar 10 '17


People also ask

What is a memory barrier instruction explain with an example the use of memory barrier?

A memory barrier instruction halts execution of the application code until the memory writes issued before it have finished executing. Memory barriers are used to ensure that a critical section of code has completed before continuing execution of the application code.
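For example, here is a minimal C11 sketch of that idea (assuming <stdatomic.h> is available; the variable and function names are illustrative): a release fence orders the data write before the flag write, and a matching acquire fence on the reader side orders the flag read before the data read.

#include <stdatomic.h>

int data;
_Atomic int ready;

void writer(int value) {
  data = value;                                            /* ordinary store */
  atomic_thread_fence(memory_order_release);               /* barrier: the data store completes first */
  atomic_store_explicit(&ready, 1, memory_order_relaxed);  /* then publish the flag */
}

int reader(void) {
  if (atomic_load_explicit(&ready, memory_order_relaxed)) {
    atomic_thread_fence(memory_order_acquire);             /* barrier: flag read ordered before data read */
    return data;
  }
  return -1;                                               /* not ready yet */
}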

Is volatile a memory barrier?

The keyword volatile does not guarantee a memory barrier to enforce cache-consistency. Therefore, the use of volatile alone is not sufficient to use a variable for inter-thread communication on all systems and processors.
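As a rough illustration (a sketch assuming C11 <stdatomic.h>; the names are made up): a volatile flag only stops the compiler from caching the value in a register, while an atomic flag used with release/acquire ordering also gives the cross-thread visibility and ordering guarantees.

#include <stdatomic.h>
#include <stdbool.h>

volatile bool flag_v;   /* NOT sufficient: no ordering or visibility guarantee across threads */
_Atomic bool flag_a;    /* sufficient when used with release/acquire ordering */

void signal_ready(void) {
  atomic_store_explicit(&flag_a, true, memory_order_release);
}

bool is_ready(void) {
  return atomic_load_explicit(&flag_a, memory_order_acquire);
}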

What is memory barrier in operating system?

A memory barrier is an instruction that requires the processor to apply an ordering constraint between memory operations that occur before and after the memory barrier instruction in the program. Such instructions are also known as memory fences in other architectures.

What is a compiler barrier?

A compiler barrier prevents the compiler from re-ordering read and write operations across it, but by itself it emits no instruction and does not stop the CPU from re-ordering them. A hardware memory barrier (fence) additionally prevents the CPU from re-ordering read and write operations.
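For instance (a sketch assuming GCC/Clang for the inline-assembly form and C11 <stdatomic.h> for the fence; both are common idioms, not part of the question):

#include <stdatomic.h>

/* Compiler barrier only: forbids the compiler from moving memory accesses
   across this point, but emits no CPU fence instruction (GCC/Clang idiom). */
#define compiler_barrier() __asm__ __volatile__("" ::: "memory")

/* Hardware memory barrier: also orders the CPU's memory operations. */
static inline void hardware_barrier(void) {
  atomic_thread_fence(memory_order_seq_cst);
}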


1 Answer

No. Sequence point ordering does not carry across thread boundaries. That is the whole point of why we need memory ordering guarantees in the first place.

The sequence point ordering is always guaranteed (modulo the as-if rule) for the thread that executes the code. Any other thread might observe that thread's writes in an arbitrary order. This means that even if Thread #1 can verify that it performs its writes in a certain order, Thread #2 might still observe them in a different order. That is why volatile is also not enough here.

Technically this can be explained, for instance, by caching: the writes by Thread #1 might go to a write buffer first, where they are still invisible to Thread #2. They only become visible once the write buffer is flushed back to main memory, and the hardware is allowed to reorder the writes before flushing.

Note that just because the platform is allowed to reorder writes does not mean that it will. This is the dangerous part. Code that runs perfectly fine on one platform might break out of the blue when ported to another. Using proper memory orderings guarantees that the code will work everywhere.
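As a concrete illustration of what such a memory ordering could look like for the code in the question, here is a minimal sketch using C11 <stdatomic.h> (the struct layout, field names, and helper functions are illustrative, not taken from the question): the release store to the flag guarantees that a reader which sees the flag set with an acquire load also sees the earlier write to the payload.

#include <stdatomic.h>

struct request {
  int a;               /* payload, corresponds to p->a */
  _Atomic int b;       /* "request is valid" flag, corresponds to p->b */
};

/* Writer side (roughly func1/func2): write the payload, then publish the flag. */
void publish(struct request *p, int x, int y) {
  p->a = x;
  atomic_store_explicit(&p->b, y, memory_order_release);
  /* releaseSemaphore(s) would follow here, as in the original code */
}

/* Reader side (the processing loop): if the flag is seen as set with an
   acquire load, the earlier write to p->a is guaranteed to be visible too. */
int try_consume(struct request *p, int *out) {
  if (atomic_load_explicit(&p->b, memory_order_acquire)) {
    *out = p->a;
    return 1;
  }
  return 0;
}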

ComicSansMS, answered Sep 18 '22