
Double-checked lock optimization to implement thread-safe lazy loading in Swift

I have implemented what I think is double-checked locking in a class to achieve thread-safe lazy loading.

Just in case you wondered, this is for a DI library I'm currently working on.

The code I'm talking about is the following:

import Foundation

final class Builder<I> {

    private let body: () -> I

    private var instance: I?
    private let instanceLocker = NSLock()

    private var isSet = false
    private let isSetDispatchQueue = DispatchQueue(label: "\(Builder.self)", attributes: .concurrent)

    init(body: @escaping () -> I) {
        self.body = body
    }

    private var syncIsSet: Bool {
        set {
            isSetDispatchQueue.async(flags: .barrier) {
                self.isSet = newValue
            }
        }
        get {
            var isSet = false
            isSetDispatchQueue.sync {
                isSet = self.isSet
            }
            return isSet
        }
    }

    var value: I {

        if syncIsSet {
            return instance! // should never fail
        }

        instanceLocker.lock()

        if syncIsSet {
            instanceLocker.unlock()
            return instance! // should never fail
        }

        let instance = body()
        self.instance = instance

        syncIsSet = true
        instanceLocker.unlock()

        return instance
    }
}

The logic is to allow concurrent reads of isSet so that accesses to instance can run in parallel from different threads. To avoid race conditions (that's the part I'm not 100% sure about), I have two barriers: one when setting isSet and one when setting instance. The trick is to unlock the latter only after isSet is set to true, so the threads waiting for instanceLocker to be unlocked block a second time on isSet while it is being asynchronously written on the concurrent dispatch queue.
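
For illustration, here is a sketch of how the builder might be exercised from many threads at once (ExpensiveService is a placeholder type, not part of the original code):

import Foundation

// ExpensiveService stands in for whatever the builder constructs.
final class ExpensiveService {}

let builder = Builder { ExpensiveService() }

// Hammer the lazy accessor from many threads; body() should run exactly once.
DispatchQueue.concurrentPerform(iterations: 100) { _ in
    _ = builder.value
}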

I think I'm very close to a final solution here, but since I'm not a distributed-systems expert, I'd like to make sure.

Also, using a dispatch queue wasn't my first choice, because I suspect it makes reading isSet less efficient than it could be, but again, I'm no expert.

So my two questions are:

  • Is this 100% thread-safe and if not, why?
  • Is there a more efficient way of doing this in Swift?
asked Jun 10 '18 by trupin


1 Answer

IMO, the correct tool here is os_unfair_lock. The point of double-checked locking is to avoid the expense of a full kernel lock, and os_unfair_lock provides that in the uncontested case. The "unfair" part is that it makes no promises to waiting threads: if one thread unlocks, it is allowed to relock immediately without another waiting thread getting a chance (which could therefore starve). In practice, with a very small critical section this is not relevant (in this case you're just checking a property against nil). Dispatching to a queue is also very fast, but it is a higher-level construct built on primitives like unfair locks, so it won't be as fast as os_unfair_lock itself.

import os

final class Builder<I> {

    private let body: () -> I
    private var lock = os_unfair_lock()

    init(body: @escaping () -> I) {
        self.body = body
    }

    private var _value: I!
    var value: I {
        // Both the nil check and the initialization happen under the lock,
        // so body() runs at most once.
        os_unfair_lock_lock(&lock)
        if _value == nil {
            _value = body()
        }
        os_unfair_lock_unlock(&lock)

        return _value
    }
}

Note that you were correct to do synchronization on syncIsSet. If you'd treated it as a primitive (as is common in other double-checked synchronizations), you'd be relying on things that Swift doesn't promise: both the atomicity of writing Bools and that the compiler would actually check the Boolean twice, since there's no volatile. Given that you're doing synchronization, the comparison is between os_unfair_lock and dispatching to a queue.
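
For reference, if you did want to treat the flag as a primitive, you would need explicit atomics with acquire/release ordering rather than a plain Bool. A minimal sketch of that variant, assuming the swift-atomics package (ManagedAtomic) is available; AtomicBuilder is a hypothetical name, not something from the question:

import Atomics    // swift-atomics package (an assumption, not used in the question)
import Foundation

final class AtomicBuilder<I> {

    private let body: () -> I
    private var instance: I?
    private let isSet = ManagedAtomic<Bool>(false)
    private let lock = NSLock()

    init(body: @escaping () -> I) {
        self.body = body
    }

    var value: I {
        // Fast path: the acquiring load pairs with the releasing store below,
        // so observing true guarantees that instance is fully visible.
        if isSet.load(ordering: .acquiring) {
            return instance!
        }

        lock.lock()
        defer { lock.unlock() }

        // Second check: another thread may have initialized while we waited.
        if isSet.load(ordering: .acquiring) {
            return instance!
        }

        let newInstance = body()
        instance = newInstance
        isSet.store(true, ordering: .releasing)
        return newInstance
    }
}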

This said, in my experience this kind of laziness is almost always unwarranted in mobile apps. It only actually saves you time if the variable is very expensive to compute but likely never accessed. Sometimes in massively parallel systems, being able to move the initialization around is worthwhile, but mobile apps live on a fairly limited number of cores, so there generally isn't some extra core lying around to shunt this off to. I generally would not pursue this unless you've already discovered that it is a significant problem when your framework is used in live systems. If you have, then I recommend profiling your approach against os_unfair_lock in real-world usages that show the problem. I expect os_unfair_lock to win.
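
If you do profile, a rough micro-benchmark along these lines is usually enough to compare the uncontended fast paths. QueueBuilder and UnfairBuilder are hypothetical names for the dispatch-queue-based and os_unfair_lock-based implementations above, and the numbers are only meaningful relative to each other:

import Foundation

// Illustrative timing harness; the iteration count is arbitrary.
func measure(_ label: String, iterations: Int = 1_000_000, _ work: () -> Void) {
    let start = DispatchTime.now().uptimeNanoseconds
    for _ in 0..<iterations { work() }
    let elapsed = DispatchTime.now().uptimeNanoseconds - start
    print("\(label): \(elapsed / UInt64(iterations)) ns per access")
}

let queueBuilder = QueueBuilder { 42 }
let unfairBuilder = UnfairBuilder { 42 }

measure("dispatch queue") { _ = queueBuilder.value }
measure("os_unfair_lock") { _ = unfairBuilder.value }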

answered Oct 02 '22 by Rob Napier