What's the best way / least wait-causing way to synchronize read/write access to an instance variable in Objective-C for iOS?
The variable gets read and written very often (say, 1000 reads and writes per second). It is not important that changes take effect immediately. It is not even important that reads are consistent with one another, but writes must sooner or later be reflected in the data acquired by reads. Is there some data structure which allows this?
I thought of this: create two variables, v[0] and v[1]. For each v[i], create a concurrent dispatch queue for constructing a readers-writer-locking mechanism around it. Let's call them q[i].
v[0] gets written to, adhering to the locking mechanism with q[0].
v[1] is read, and only at a certain chance, e.g. 1%, does the read operation look into v[0] and update v[1] if necessary.
The following pseudo-code illustrates this:
typedef int VType; // the type of the variable

VType* v;            // array holding the two copies, v[0] and v[1]
dispatch_queue_t* q; // queues synchronizing access to v[i]; each must be a
                     // custom concurrent queue (DISPATCH_QUEUE_CONCURRENT),
                     // since barriers have no effect on the global queues

- (void) setV:(VType)newV {
    [self setV:newV at:0];
}

- (void) setV:(VType)newV at:(int)i {
    // barrier: waits for in-flight reads on q[i] and blocks new ones while writing
    dispatch_barrier_async(q[i], ^{
        v[i] = newV;
    });
}

- (VType) getV:(int)i {
    __block VType result;
    dispatch_sync(q[i], ^{
        result = v[i];
    });
    return result;
}

- (VType) getV {
    VType result = [self getV:1];
    // with ~1% probability, pull the latest value from v[0] into v[1]
    if ([self random] < 0.01) {
        VType v0_result = [self getV:0];
        if (v0_result != result) {
            [self setV:v0_result at:1];
            result = v0_result;
        }
    }
    return result;
}

- (float) random {
    // some random number generator - fast, but not necessarily good
    return (float)arc4random() / (float)UINT32_MAX;
}
This has the following benefits:
v[0] is usually not occupied by a read operation. Therefore, a write operation usually does not block.
Most of the time, v[1] does not get written to, so read operations on it usually don't block.
Still, if many read operations occur, the written values are eventually propagated from v[0] into v[1]. Some values might be missed, but that doesn't matter for my application.
What do you guys think, does this work? Are there better solutions?
UPDATE:
Some performance benchmarking (for each benchmark, reads and writes are performed concurrently, as quickly as possible, for 1 second, with one reading queue and one writing queue):
On iPhone 4S with iOS 7:
runMissingSyncBenchmark: 484759 w/s
runMissingSyncBenchmark: 489558 r/s
runConcurrentQueueRWSyncBenchmark: 2303 w/s
runConcurrentQueueRWSyncBenchmark: 2303 r/s
runAtomicPropertyBenchmark: 460479 w/s
runAtomicPropertyBenchmark: 462145 r/s
In Simulator with iOS 7:
runMissingSyncBenchmark: 16303208 w/s
runMissingSyncBenchmark: 12239070 r/s
runConcurrentQueueRWSyncBenchmark: 2616 w/s
runConcurrentQueueRWSyncBenchmark: 2615 r/s
runAtomicPropertyBenchmark: 4212703 w/s
runAtomicPropertyBenchmark: 4300656 r/s
So far, the atomic property wins. Tremendously. This was tested with an SInt64.
I expected that the approach with the concurrent queue is similar in performance to the atomic property, as it is the standard approach for an r/w-sync mechanism.
Of course, the runMissingSyncBenchmark sometimes produces reads which show that a write of the SInt64 is only halfway done.
Perhaps a spinlock will be optimal (see man 3 spinlock).
Since a spin lock can be tested for whether it is currently held (which is a fast operation), the reader task could just return the previous value if the spin lock is held by the writer task.
That is, the reader task uses OSSpinLockTry() and retrieves the actual value only if the lock could be obtained; otherwise, it uses the previous value.
The writer task uses OSSpinLockLock() and OSSpinLockUnlock() in order to atomically update the value.
From the man page:
NAME
     OSSpinLockTry, OSSpinLockLock, OSSpinLockUnlock -- atomic spin lock synchronization primitives

SYNOPSIS
     #include <libkern/OSAtomic.h>

     bool OSSpinLockTry(OSSpinLock *lock);
     void OSSpinLockLock(OSSpinLock *lock);
     void OSSpinLockUnlock(OSSpinLock *lock);

DESCRIPTION
     Spin locks are a simple, fast, thread-safe synchronization primitive that is suitable in situations where contention is expected to be low. The spinlock operations use memory barriers to synchronize access to shared memory protected by the lock. Preemption is possible while the lock is held.

     OSSpinLock is an integer type. The convention is that unlocked is zero, and locked is nonzero. Locks must be naturally aligned and cannot be in cache-inhibited memory.

     OSSpinLockLock() will spin if the lock is already held, but employs various strategies to back off, making it immune to most priority-inversion livelocks. But because it can spin, it may be inefficient in some situations.

     OSSpinLockTry() immediately returns false if the lock was held, true if it took the lock. It does not spin.

     OSSpinLockUnlock() unconditionally unlocks the lock by zeroing it.

RETURN VALUES
     OSSpinLockTry() returns true if it took the lock, false if the lock was already held.