Conventionally, a ReadWriteLock is used with the read lock held while reading and the write lock held while writing. I ran into a case where I think using them in reverse helps, but hopefully you can suggest a better way.
Here is what I want: there will be a lot of writes but very few reads. An example is a metric that computes the average latency of requests.
Treat this almost as pseudocode.
metric.addValue(latency); // Called a lot.
metric.getAverage(); // Called sparingly.
We can do the following:
void addValue(long value) {
    atomicCount.incrementAndGet();
    atomicSum.addAndGet(value);
}
double getAverage() {
    return atomicCount.get() != 0 ? atomicSum.get() / atomicCount.get() : 0.0;
}
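For concreteness, the lock-free version above can be written with java.util.concurrent.atomic.AtomicLong. This is a minimal sketch; the class and field names are my own, just for illustration:

```java
import java.util.concurrent.atomic.AtomicLong;

// Lock-free variant: under concurrency the average can momentarily
// include a count whose value has not yet been added to the sum.
class AtomicAverageMetric {
    private final AtomicLong count = new AtomicLong();
    private final AtomicLong sum = new AtomicLong();

    void addValue(long value) {
        count.incrementAndGet();
        sum.addAndGet(value);
    }

    double getAverage() {
        long c = count.get(); // read count once to avoid a second race
        return c != 0 ? (double) sum.get() / c : 0.0;
    }
}
```

With a single caller it is exact; the imprecision only appears when getAverage() races with concurrent addValue() calls.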
The problem is that getAverage() may observe count and sum at slightly different moments, so it can include an extra count whose value has not yet reached the sum. The result is usually correct, and off by at most a few in-flight updates, but I want it to be precise.
Here is the trick:
ReadWriteLock rw = new ReentrantReadWriteLock(true); // fair, so the (rare) reader is not starved
Lock read = rw.readLock();
Lock write = rw.writeLock();
void addValue(long value) {
    read.lock(); // "read" lock taken when mutating: many writers may proceed concurrently
    try {
        atomicCount.incrementAndGet();
        atomicSum.addAndGet(value);
    } finally {
        read.unlock();
    }
}
double getAverage() {
    write.lock(); // "write" lock taken when reading: excludes all mutators, so count and sum are consistent
    try {
        return atomicCount.get() != 0 ? atomicSum.get() / atomicCount.get() : 0.0;
    } finally {
        write.unlock();
    }
}
My question is, can I do better?
Note: I know about the (cast) issues, and that calling atomicCount.get() multiple times can be avoided for better performance, but I didn't want to clutter the code too much.
There's really no point in concurrent atomic increments here; updates to the same variables can't truly proceed in parallel anyway.
The simplest solution, a plain lock guarding ordinary count/sum variables, will perform much better:
synchronized (this) {
    count++;
    sum += value;
}
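Spelled out as a small class (a sketch; the class name is mine), the locked version looks like this:

```java
// Plain mutual exclusion: count and sum are always read and written
// together under the same monitor, so getAverage() can never see them
// out of sync.
class LockedAverageMetric {
    private long count;
    private long sum;

    synchronized void addValue(long value) {
        count++;
        sum += value;
    }

    synchronized double getAverage() {
        return count != 0 ? (double) sum / count : 0.0;
    }
}
```

Uncontended synchronized blocks are cheap on modern JVMs, and this gives exactly the snapshot consistency the question asks for.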
To be more parallel, we need "sharding": each thread maintains its own stats, and the reader queries them all to assemble the whole picture. (The per-thread stats need to be volatile; the reader uses Michael Burr's method to retrieve a stable version of each thread's stats.)
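Note that java.util.concurrent.atomic.LongAdder (Java 8+) already implements a form of this sharding internally, striping updates across cells so writers rarely contend. A sketch using it (class name is mine; note this gives up the exact read-time snapshot, like the original AtomicLong version, in exchange for write throughput):

```java
import java.util.concurrent.atomic.LongAdder;

// LongAdder stripes increments across internal cells under contention;
// sum() walks the cells on the (rare) read.
class ShardedAverageMetric {
    private final LongAdder count = new LongAdder();
    private final LongAdder sum = new LongAdder();

    void addValue(long value) {
        count.increment();
        sum.add(value);
    }

    double getAverage() {
        long c = count.sum(); // not an atomic snapshot across the two adders
        return c != 0 ? (double) sum.sum() / c : 0.0;
    }
}
```

A read racing with a writer can still see a count without its corresponding sum, so this fits the many-writers/few-readers workload only if approximate reads are acceptable.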