
dispatch_queue_set_specific vs. getting the current queue

I am trying to get my head around the difference in usage between these two:

static void *myFirstQueue = "firstThread";

dispatch_queue_t firstQueue = dispatch_queue_create("com.year.new.happy", DISPATCH_QUEUE_CONCURRENT);

dispatch_queue_set_specific(firstQueue, myFirstQueue, (void*) myFirstQueue, NULL);

Question #1

What is the difference between this:

dispatch_sync(firstQueue, ^{

    if(dispatch_get_specific(myFirstQueue))
    {
        //do something here
    }
});

and the following:

dispatch_sync(firstQueue, ^{

    if(firstQueue == dispatch_get_current_queue())
    {
       //do something here
    }
});

?

Question #2:

Instead of using the above (void*) myFirstQueue in

dispatch_queue_set_specific(firstQueue, myFirstQueue, (void*) myFirstQueue, NULL);

Can we use a static int *myFirstQueue = 0; instead?

My reasoning is based on the fact that:

dispatch_once_t is also 0 (is there any correlation here? By the way, I still don’t quite get why dispatch_once_t must be initialized to 0, although I have already read questions here on SO).

Question #3

Can you cite me an example of GCD Deadlock here?

Question #4

This might be a little too much to ask; I will ask anyway, in case someone happens to know this off the top of their head. If not, it is OK to leave this part unanswered.

I haven’t tried this, because I really don’t know how. But my concept is this:

Is there any way we can "place a handle" in some queue so that we still hold a handle on it and can detect when a deadlock occurs after the queue has been spun off; and when one does occur, since we still have the handle to the queue we previously set up, we could somehow do something to break the deadlock?

Again, if this is too much to answer, or if my reasoning here (in Question #4) is completely unworkable, feel free to leave this part unanswered.

Happy New Year.


@san.t

With static void *myFirstQueue = 0;

We do this:

dispatch_queue_set_specific(firstQueue, &myFirstQueue, &myFirstQueue, NULL);

Totally understandable.

But if we do:

static void *myFirstQueue = (void *)1;
// or any number other than 0: would it then be OK to revert to the following?
dispatch_queue_set_specific(firstQueue, myFirstQueue, (void*) myFirstQueue, NULL);

Regarding dispatch_once_t:

Could you elaborate more on this:

Why must dispatch_once_t first be 0, and how and why would it need to act as a boolean at a later stage? Does this have to do with memory safety, or with the fact that the memory address might previously have been occupied by something that was not 0 (nil)?

As for Question #3:

Sorry, I might not have been completely clear: I didn't mean that I am experiencing a deadlock. I meant to ask whether someone could show me a code scenario with GCD that leads to a deadlock.

Lastly:

Hopefully you could answer Question #4. If not, as previously mentioned, it’s OK.

asked Dec 31 '13 by Unheilig

2 Answers

First, I really don't think you meant to make that queue concurrent. dispatch_sync()ing to a concurrent queue is not really going to accomplish much of anything (concurrent queues do not guarantee ordering between blocks running on them). So, the rest of this answer assumes you meant to have a serial queue there. Also, I'm going to answer this in general terms, rather than your specific questions; hopefully that's ok :)

There are two fundamental issues with using dispatch_get_current_queue() this way. One very broad one that can be summarized as "recursive locking is a bad idea", and one dispatch-specific one that can be summarized as "you can and often will have more than one current queue".

Problem #1: Recursive locking is a bad idea

The usual purpose of a private serial queue is to protect an invariant of your code ("invariant" being "something that must be true"). For example, if you're using a queue to guard access to a property so that it's thread-safe, then the invariant is "this property does not have an invalid value" (for example: if the property is a struct, then half the struct could have a new value and half could have the old value, if it was being set from two threads at once. A serial queue forces one thread or the other to finish setting the whole struct before the other can start).
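To make that concrete, here is a minimal sketch of the kind of setup I mean (the class, queue label, and property names are invented for illustration):

#import <Foundation/Foundation.h>
#import <dispatch/dispatch.h>

// Invented example: a private serial queue guarding a struct-typed property,
// so no reader ever observes a half-written value.
typedef struct { double x, y; } Position;

@interface Player : NSObject
- (Position)position;
- (void)setPosition:(Position)position;
@end

@implementation Player {
    dispatch_queue_t _isolationQueue; // serial queue protecting _position
    Position _position;
}

- (instancetype)init {
    if ((self = [super init])) {
        _isolationQueue = dispatch_queue_create("com.example.player.isolation",
                                                DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

- (Position)position {
    __block Position p;
    // The invariant ("_position is never half-written") holds when this block starts.
    dispatch_sync(_isolationQueue, ^{ p = self->_position; });
    return p;
}

- (void)setPosition:(Position)position {
    // Inside the block the invariant may be temporarily broken;
    // it holds again by the time the block returns.
    dispatch_sync(_isolationQueue, ^{ self->_position = position; });
}

@end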

We can infer that for this to make sense, the invariant has to hold when beginning execution of a block on the serial queue (otherwise, it clearly wasn't protected). Once the block has begun executing, it can break the invariant (say, set the property) without fear of messing up any other threads as long as the invariant holds again by the time it returns (in this example, the property has to be fully set).

Summarizing just to make sure you're still following: at the beginning and end of each block on a serial queue, the invariant that queue is protecting must hold. In the middle of each block, it may be broken.

If, inside the block, you call something which tries to use the thing protected by the queue, then you've changed this simple rule to a much much more complicated one: instead of "at the beginning and end of each block" it's "at the beginning, end, and at any point where that block calls something outside of itself". In other words, instead of thinking about your thread-safety at the block level, you now have to examine every individual line of each block.

What does this have to do with dispatch_get_current_queue()? The only reason to use dispatch_get_current_queue() here is to check "are we already on this queue?", and if you're already on the current queue then you're already in the scary situation above! So don't do that. Use private queues to protect things, and don't call out to other code from inside them. You should already know the answer to "am I on this queue?" and it should be "no".

This is the biggest reason dispatch_get_current_queue() was deprecated: to stop people from trying to simulate recursive locking (what I've described above) with it.

Problem #2: You can have more than one current queue!

Consider this code:

dispatch_async(queueA, ^{
    dispatch_sync(queueB, ^{
        //what is the current queue here?
    });
});

Clearly queueB is current, but we're also still on queueA! dispatch_sync causes the work on queueA to wait for the completion of work on queueB, so they're both effectively "current".

This means that this code will deadlock:

dispatch_async(queueA, ^{
    dispatch_sync(queueB, ^{
        dispatch_sync(queueA, ^{});
    });
});

You can also have multiple current queues by using target queues:

dispatch_set_target_queue(queueB, queueA);
dispatch_sync(queueB, ^{
    dispatch_sync(queueA, ^{ /* deadlock! */ });
});

What would really be needed here is something like a hypothetical "dispatch_queue_is_synchronous_with_queue(queueA, queueB)", but since that would only be useful for implementing recursive locking, and I've already described how that's a bad idea... it's unlikely to be added.

Note that if you only use dispatch_async(), then you're immune to deadlocks. Sadly, you're not at all immune to race conditions.
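To illustrate that last point (this is my own made-up snippet, not something from the question): the following cannot deadlock, since only dispatch_async() is used, but the unsynchronized increments still race.

dispatch_queue_t work = dispatch_queue_create("com.example.race", DISPATCH_QUEUE_CONCURRENT);
__block NSInteger counter = 0;

for (int i = 0; i < 1000; i++) {
    dispatch_async(work, ^{
        counter++; // unsynchronized read-modify-write: a data race
    });
}
// No deadlock is possible here, but counter may well end up less than 1000.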

answered Nov 02 '22 by Catfish_Man


Question 1: The two code snippets do the same thing, that is, they "do some work" when the block is indeed running on firstQueue. However, they use different ways to detect that they are on firstQueue: the first sets a non-NULL context ((void *)myFirstQueue) for a specific key (myFirstQueue) and later checks that the context is indeed non-NULL; the second checks by using the now-deprecated function dispatch_get_current_queue. The first method is preferred. That said, the check seems unnecessary to me, since dispatch_sync already guarantees that the block runs on firstQueue.
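For what it's worth, the place where the key/context check is genuinely useful is outside the block, to decide whether it is safe to dispatch_sync at all. A sketch (the names below are invented, and note the caveats about recursive locking in the other answer):

static void *kFirstQueueKey = &kFirstQueueKey; // key is compared by address, never dereferenced

dispatch_queue_t firstQueue = dispatch_queue_create("com.year.new.happy", DISPATCH_QUEUE_SERIAL);
dispatch_queue_set_specific(firstQueue, kFirstQueueKey, (void *)1, NULL);

void (^work)(void) = ^{ /* touch the state that firstQueue protects */ };

if (dispatch_get_specific(kFirstQueueKey) != NULL) {
    work();                          // already on firstQueue: run inline instead of deadlocking
} else {
    dispatch_sync(firstQueue, work); // not on firstQueue: safe to dispatch_sync
}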

Question 2: just using static int *myFirstQueue = 0; is not OK: that way, myFirstQueue is a NULL pointer, and dispatch_queue_set_specific(firstQueue, key, context, NULL) requires a non-NULL key and context to work. However, it will work with minor changes like this:

static void *myFirstQueue = 0;
dispatch_queue_t firstQueue = dispatch_queue_create("com.year.new.happy", DISPATCH_QUEUE_CONCURRENT);
dispatch_queue_set_specific(firstQueue, &myFirstQueue, &myFirstQueue, NULL);

this would use the address of myFirstQueue variable as key and context.
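With that change, the check inside the block would then query by the same address (a small sketch adapted from the snippet above):

dispatch_sync(firstQueue, ^{
    // the key is now the address of myFirstQueue, so we query with &myFirstQueue
    if (dispatch_get_specific(&myFirstQueue)) {
        // do something here
    }
});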

If we do:

static void *myFirstQueue = (void *)1;
// or any number other than 0: would it then be OK to revert to the following?
dispatch_queue_set_specific(firstQueue, myFirstQueue, (void*) myFirstQueue, NULL);

I guess it will be fine, as the myFirstQueue pointer (used as both key and context) won't be dereferenced, provided the last (destructor) parameter is NULL.

The fact that dispatch_once_t is also 0 has nothing to do with this. It is 0 at first, and after the block has been dispatched once its value changes to non-zero, essentially acting as a boolean.

Here are extracts from once.h. You can see that dispatch_once_t is actually a long, and that Apple's implementation requires it to be initially 0, probably because static and global variables default to zero. You can also see that there is this line:

if (DISPATCH_EXPECT(*predicate, ~0l) != ~0l) {

which is a fast path: the real dispatch_once() function is only called while the predicate has not yet been set to ~0l (it starts at 0 and is set to ~0l after the block has run). It is not related to memory safety.

/*!
 * @typedef dispatch_once_t
 *
 * @abstract
 * A predicate for use with dispatch_once(). It must be initialized to zero.
 * Note: static and global variables default to zero.
 */
typedef long dispatch_once_t;

/*!
 * @function dispatch_once
 *
 * @abstract
 * Execute a block once and only once.
 *
 * @param predicate
 * A pointer to a dispatch_once_t that is used to test whether the block has
 * completed or not.
 *
 * @param block
 * The block to execute once.
 *
 * @discussion
 * Always call dispatch_once() before using or testing any variables that are
 * initialized by the block.
 */
#ifdef __BLOCKS__
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
void
dispatch_once(dispatch_once_t *predicate, dispatch_block_t block);

DISPATCH_INLINE DISPATCH_ALWAYS_INLINE DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
void
_dispatch_once(dispatch_once_t *predicate, dispatch_block_t block)
{
    if (DISPATCH_EXPECT(*predicate, ~0l) != ~0l) {
        dispatch_once(predicate, block);
    }
}
#undef dispatch_once
#define dispatch_once _dispatch_once
#endif
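For completeness, here is the typical way that predicate is used (a standard sketch; MyManager is an invented class name): the static dispatch_once_t starts at zero, and dispatch_once() sets it to non-zero after running the block exactly once.

+ (instancetype)sharedInstance {
    static MyManager *shared = nil;   // MyManager is an invented class name
    static dispatch_once_t onceToken; // starts at zero, as the header requires
    dispatch_once(&onceToken, ^{
        shared = [[MyManager alloc] init]; // executed exactly once, thread-safely
    });
    return shared;
}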

Question 3: assuming myQueue is a serial queue (with a concurrent queue the following would be fine and would not deadlock):

dispatch_async(myQueue, ^{
    dispatch_sync(myQueue, ^{
        NSLog(@"This would be a deadlock");
    });
});
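By contrast (my own variation on the snippet above), using dispatch_async for the inner block does not deadlock, because the outer block does not wait for the inner one to finish:

dispatch_async(myQueue, ^{
    dispatch_async(myQueue, ^{
        NSLog(@"No deadlock: this block simply runs after the outer block finishes");
    });
});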

Question 4: I'm not sure about that one.

answered Nov 02 '22 by san.t