
Happens-before for direct ByteBuffer

I have a direct ByteBuffer (off-heap) in one thread and safely publish it to a different thread using one of the mechanisms given to me by the JMM. Does the happens-before relationship extend to the native (off-heap) memory wrapped by the ByteBuffer? If not, how can I safely publish the contents of a direct ByteBuffer from one thread to a different one?

Edit

This is not a duplicate of Can multiple threads see writes on a direct mapped ByteBuffer in Java? because

  • I am not talking about an mmap()ed region but about a general off-heap area
  • I am safely publishing the ByteBuffer
  • I am not concurrently modifying the contents of the ByteBuffer, I am just handing it from one thread to a different one

Edit 2

This is not a duplicate of Options to make Java's ByteBuffer thread safe. I am not trying to concurrently modify a ByteBuffer from two different threads. I am trying to hand it over from one thread to a different one and get happens-before semantics on the native memory region backing the direct ByteBuffer. The first thread will no longer modify or read from the ByteBuffer once it has been handed over.
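
For concreteness, here is a minimal sketch of the kind of hand-off I mean (the class and the choice of a BlockingQueue are illustrative, not my actual code): the first thread fills the direct buffer, publishes it through the queue, and never touches it again.

    import java.nio.ByteBuffer;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class DirectBufferHandOff {

        // The queue's internal synchronization provides the happens-before
        // edge between the producer's put() and the consumer's take().
        private static final BlockingQueue<ByteBuffer> QUEUE = new ArrayBlockingQueue<>(16);

        public static void main(String[] args) throws InterruptedException {
            Thread producer = new Thread(() -> {
                ByteBuffer buf = ByteBuffer.allocateDirect(1024); // off-heap memory
                for (int i = 0; i < buf.capacity(); i++) {
                    buf.put((byte) i);                            // writes before publication
                }
                buf.flip();
                try {
                    QUEUE.put(buf);   // publication; the producer never touches buf again
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    ByteBuffer buf = QUEUE.take();                // consumption
                    System.out.println("first byte = " + buf.get(0));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }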

asked Nov 01 '17 by Philippe Marschall

1 Answer

Certainly if you read and write the ByteBuffer in Java code, using Java methods such as put and get, then the happens-before relationship between your modifications on the first thread, the publishing/consumption, and finally the subsequent access on the second thread will apply[0] in the expected way. After all, the fact that the ByteBuffer is backed by "off heap" memory is just an implementation detail: it doesn't allow the Java methods on ByteBuffer to break the memory model contract.
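
As an illustration (a minimal sketch of my own, not code from the question), publishing through a plain volatile flag already gives you this: every put the writer performs before the volatile write is guaranteed to be visible to a reader that has observed the flag, exactly as it would be for a heap-backed buffer.

    import java.nio.ByteBuffer;

    public class VolatilePublish {

        private static ByteBuffer buffer;           // plain field, written before the publishing store
        private static volatile boolean published;  // publishing store / consuming load pair

        public static void main(String[] args) throws InterruptedException {
            Thread writer = new Thread(() -> {
                ByteBuffer buf = ByteBuffer.allocateDirect(64);
                buf.putLong(0, 42L);    // write to the off-heap memory ...
                buffer = buf;
                published = true;       // ... happens-before this volatile write
            });

            Thread reader = new Thread(() -> {
                while (!published) {    // volatile read that pairs with the volatile write
                    Thread.onSpinWait();
                }
                System.out.println(buffer.getLong(0)); // guaranteed to print 42
            });

            writer.start();
            reader.start();
            writer.join();
            reader.join();
        }
    }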

Things get a bit hazy if you are talking about writes to this byte buffer from native code you call through JNI or another mechanism. I think as long as you are using normal stores (i.e., not non-temporal stores or anything with weaker semantics than normal stores) in your native code, you will be fine in practice. After all, the JVM internally implements stores to heap memory via the same mechanism, and in particular the get- and put-type methods will be implemented with normal loads and stores. The publishing action, which generally involves some type of release store, will apply to all prior Java actions and also to the stores inside your native code.
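
To sketch that shape (again my own illustration, not code from the question): fillBuffer below is a plain Java stand-in for what would be a JNI native method whose C side writes the buffer with ordinary stores, e.g. a memcpy. The publishing action in Java then covers those writes as well.

    import java.nio.ByteBuffer;
    import java.util.concurrent.SynchronousQueue;

    public class NativeFillHandOff {

        // Stand-in for a JNI call such as
        //   private static native void fillBuffer(ByteBuffer buf, int len);
        // whose native side fills the off-heap memory with ordinary stores.
        private static void fillBuffer(ByteBuffer buf, int len) {
            for (int i = 0; i < len; i++) {
                buf.put(i, (byte) (i & 0x7f));
            }
        }

        private static final SynchronousQueue<ByteBuffer> HAND_OFF = new SynchronousQueue<>();

        public static void main(String[] args) throws InterruptedException {
            Thread producer = new Thread(() -> {
                ByteBuffer buf = ByteBuffer.allocateDirect(4096);
                fillBuffer(buf, buf.capacity());  // the (native) stores happen before ...
                try {
                    HAND_OFF.put(buf);            // ... this publishing action
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            producer.start();

            ByteBuffer received = HAND_OFF.take(); // consuming load pairs with the publication
            System.out.println("last byte = " + received.get(received.capacity() - 1));
            producer.join();
        }
    }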

You can find some expert discussion on the concurrency mailing lists of more or less this topic. The precise question there is "Can I use Java locks to protect a buffer accessed only by native code", but the underlying concerns are pretty much the same. The conclusion seems consistent with the above: you are safe if you do normal loads and stores to a normal[1] memory area. If you want to use weaker instructions, you'll need a fence.


[0] That was a bit of a lengthy, tortured sentence, but I wanted to make it clear that there is a whole chain of happens-before pairs that have to be correctly synchronized for this to work: (A) between the writes to the buffer and the publishing store on the first thread, (B) between the publishing store and the consuming load, and (C) between the consuming load and the subsequent reads or writes by the second thread. The pair (B) is purely in Java-land, so it follows the regular rules. The question is then mostly about whether (A) and (C), which each have one "native" element, are also fine.

[1] Normal in this context more or less means the same type of memory area that Java uses, or at least one with consistency guarantees as strong as those of the memory Java uses. You have to go out of your way to violate this, and because you are using a ByteBuffer you already know the area is allocated by Java and has to play by the normal rules (since the Java-level methods on the ByteBuffer need to work in a way consistent with the memory model, at least).

answered Oct 13 '22 by BeeOnRope