How to implement a reentrant locking mechanism through dispatch concurrent queue (GCD)?

I have just read this post, and its solution seems convincing:

  • A serial queue is used to synchronize access.
  • dispatch_get_specific/dispatch_set_specific is used to provide reentrance capability (a rough sketch of the whole scheme follows below).
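
For reference, a minimal sketch of what that serial-queue scheme might look like in Objective-C (the class, method, and queue names here are placeholders of mine, not taken from the linked post):

```objc
#import <Foundation/Foundation.h>

static void *kQueueSpecificKey = &kQueueSpecificKey;

@interface SyncedThing : NSObject
- (void)performLocked:(dispatch_block_t)block;
@end

@implementation SyncedThing {
    dispatch_queue_t _queue;
}

- (instancetype)init {
    if ((self = [super init])) {
        _queue = dispatch_queue_create("com.example.synced", DISPATCH_QUEUE_SERIAL);
        // Tag the queue so work already running on it can recognize that fact later.
        dispatch_queue_set_specific(_queue, kQueueSpecificKey, (__bridge void *)self, NULL);
    }
    return self;
}

- (void)performLocked:(dispatch_block_t)block {
    if (dispatch_get_specific(kQueueSpecificKey) == (__bridge void *)self) {
        // Already running on _queue: run the block inline instead of deadlocking in dispatch_sync.
        block();
    } else {
        dispatch_sync(_queue, block);
    }
}

@end
```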

What I am interested in is whether it is possible to extend this scheme to implement a reentrant locking mechanism for a concurrent dispatch queue (each read is done using dispatch_sync, each write using dispatch_barrier_async, as described here; see "One Resource, Multiple Readers, and a Single Writer").
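
For context, the (non-reentrant) readers/writer pattern I mean is the usual one; a minimal sketch, with class and property names of my own choosing:

```objc
#import <Foundation/Foundation.h>

@interface ReadersWriterThing : NSObject
- (id)value;
- (void)setValue:(id)value;
@end

@implementation ReadersWriterThing {
    dispatch_queue_t _queue;   // concurrent queue guarding _value
    id _value;
}

- (instancetype)init {
    if ((self = [super init])) {
        _queue = dispatch_queue_create("com.example.rw", DISPATCH_QUEUE_CONCURRENT);
    }
    return self;
}

- (id)value {
    __block id result;
    // Reads run concurrently with each other.
    dispatch_sync(_queue, ^{ result = self->_value; });
    return result;
}

- (void)setValue:(id)value {
    // The barrier waits for in-flight reads, runs alone, then lets reads resume.
    dispatch_barrier_async(_queue, ^{ self->_value = value; });
}

@end
```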

P.S. I think I've managed to implement this using [NSThread currentThread].threadDictionary here, but I don't like dealing with [NSThread currentThread] since I rely on GCD. Is it possible to replace the usage of [NSThread currentThread].threadDictionary with some tricky dispatch_set_specific/dispatch_get_specific code?

asked Nov 25 '13 by Stanislav Pankevich




1 Answer

You asked me in a comment on the linked post if I would comment on this question. Sorry I took so long, but I remember that the first time I looked at it I didn't feel like I had anything productive to say. But I was reminded of this topic today, came back across this question, and figured I'd take a shot:

In general, I would suggest not going down this road at all. As I explained in the linked-to/from answer, implementing a recursive "lock" with dispatch_get/set_specific is never bulletproof, and going beyond the simple, serial case to the one-writer/many-readers semantic of dispatch_barrier_[a]sync isn't going to remove those problems, and would probably introduce even more issues.

As an aside, if you're just looking for an alternative to [NSThread threadDictionary] for thread-local storage, perhaps in the form of a non-Objective-C API, then you should use pthread_setspecific and pthread_getspecific. These are the lower level POSIX calls upon which [NSThread threadDictionary] is (almost certainly) built.
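
For illustration, that API is small; something along these lines (the key and function names are mine, purely for example purposes):

```objc
#include <pthread.h>
#include <stdlib.h>

static pthread_key_t gCounterKey;
static pthread_once_t gCounterKeyOnce = PTHREAD_ONCE_INIT;

// Destructor runs automatically when a thread that set a value exits.
static void DestroyCounter(void *value) {
    free(value);
}

static void MakeCounterKey(void) {
    pthread_key_create(&gCounterKey, DestroyCounter);
}

// Returns a per-thread counter, lazily created for the calling thread.
static int *ThreadLocalCounter(void) {
    pthread_once(&gCounterKeyOnce, MakeCounterKey);
    int *counter = pthread_getspecific(gCounterKey);
    if (counter == NULL) {
        counter = calloc(1, sizeof *counter);
        pthread_setspecific(gCounterKey, counter);
    }
    return counter;
}
```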

Stepping back for a minute: there is a pretty strong sentiment among veteran systems programmers that recursive locks are an anti-pattern from the get-go and are to be avoided. Here is an interesting treatise on the subject. (If you aren't interested in the apocryphal tale of why recursive mutexes exist in POSIX, just search for "the objective facts" to jump to the part that's relevant to this question.)

That piece is written in terms of more primitive "locks" (made up of mutexes and conditions), which are fundamentally different from queues, despite the fact that queues can be (sometimes quite usefully) adapted to simulate locks in some common cases. Even so, if you consider the criticisms Butenhof levies against recursive primitive locks, it quickly becomes evident that in many of the ways recursive locks are "bad", using queues to simulate locks is worse.

For instance, at the most basic level, the only way you can unlock a lock-simulated-by-a-queue is to return; there is no other way to release a queue-based lock. Calling out to other code that may need to recursively re-enter that lock, while the caller continues to hold it, extends the time the "lock" is held in a potentially unbounded way.
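
To make that last point concrete, here is a contrived comparison (the class, method, and callback names are mine, purely for illustration):

```objc
#import <Foundation/Foundation.h>

@interface Widget : NSObject
@property (nonatomic, copy) void (^onChange)(void);   // arbitrary callout to "other code"
- (void)bumpWithLock;
- (void)bumpWithQueue;
@end

@implementation Widget {
    NSLock *_lock;
    dispatch_queue_t _queue;
    NSInteger _count;
}

- (instancetype)init {
    if ((self = [super init])) {
        _lock = [NSLock new];
        _queue = dispatch_queue_create("com.example.widget", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

// With a primitive lock, you can release it *before* calling out.
- (void)bumpWithLock {
    [_lock lock];
    _count += 1;
    [_lock unlock];
    if (self.onChange) self.onChange();   // callee may take _lock itself, safely
}

// With a queue-as-lock, the "lock" is held until the block returns,
// so the callout happens while it is still held.
- (void)bumpWithQueue {
    dispatch_sync(_queue, ^{
        self->_count += 1;
        if (self.onChange) self.onChange();   // if this re-enters _queue, it deadlocks
    });
}

@end
```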

A piece of general advice that has served me well is, "Use the highest-level abstraction that gets the job done." In the context of this question, that translates to (setting aside the aforementioned criticisms of recursive locking for the moment): if you're working in Objective-C and, for whatever reason, you want recursive locks, just use @synchronized. When performance analysis tells you that your use of @synchronized is actually causing a problem, then look into better solutions (with the foresight to know that "better solutions" will probably require moving away from recursive locks altogether).
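
As a reminder of why that's viable: @synchronized is already reentrant on the same thread, so the naive nested case just works. A toy example (class and method names are mine):

```objc
#import <Foundation/Foundation.h>

@interface Account : NSObject
- (void)deposit:(double)amount;
- (void)applyBonus:(double)amount;
@end

@implementation Account {
    double _balance;
}

- (void)deposit:(double)amount {
    @synchronized (self) {
        _balance += amount;
    }
}

- (void)applyBonus:(double)amount {
    @synchronized (self) {
        // @synchronized is reentrant on the same thread, so re-entering it
        // here (via -deposit:) does not deadlock.
        [self deposit:amount];
    }
}

@end
```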

In sum, trying to adapt GCD's concurrent queue barrier behavior to simulate a recursive reader/writer lock feels like a losing proposition. At best, it would always be subject to the limitations that I explained over here for the serial case. At worst, you're promulgating a pattern that ultimately reduces concurrency.

answered Jan 03 '23 by ipmcc