It seems there is no such interface.
Do I have to iterate over all keys to get the count?
What is the design rationale behind this? Or what limitation makes this feature hard to implement?
LevelDB writes keys and values at least twice, so a single batch of N writes may be significantly faster than N individual writes. LevelDB's performance also improves greatly with more memory: a larger write buffer reduces the need to merge sorted files, since it creates a smaller number of larger sorted files.
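The batching idea can be sketched as follows. This is a toy illustration, not LevelDB's actual implementation: a plain dict stands in for the database handle (with a real Python binding such as plyvel this would be `with db.write_batch() as wb: wb.put(key, value)`).

```python
# Minimal sketch of LevelDB-style batched writes against a dict
# standing in for the database (hypothetical stand-in, not real leveldb).
class WriteBatch:
    """Buffers N puts and applies them in one commit, instead of
    paying the per-write overhead N times."""

    def __init__(self, db):
        self.db = db
        self._ops = []

    def put(self, key, value):
        self._ops.append((key, value))   # buffered, not yet visible

    def write(self):
        for key, value in self._ops:     # one commit for the whole batch
            self.db[key] = value
        self._ops.clear()


db = {}
batch = WriteBatch(db)
for i in range(1000):
    batch.put(f"key{i}", i)
batch.write()                            # all 1000 writes land at once
print(len(db))                           # 1000
```

In real LevelDB the win comes from hitting the write-ahead log once per batch rather than once per key.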
The LevelDB site provides some insight into these performance characteristics. When creating a brand-new database, the various write methods show speeds ranging from .4 MB/s to 62.7 MB/s; read performance ranged from 152 MB/s to 232 MB/s.
LevelDB is a key/value store built by Google. It supports an ordered mapping from string keys to string values. The core storage architecture of LevelDB is a log-structured merge-tree (LSM-tree), a write-optimized alternative to the B-tree: it is optimized for large sequential writes as opposed to small random writes.
Official issue 113 states: "There is no way to implement Count more efficiently inside leveldb than outside."
So it looks like there is no better way to do it: either iterate through the whole dataset, or implement your own in-application on-write counter.
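The first option, counting by full iteration, looks like this. A dict stands in for the database here (assumed stand-in; with plyvel the loop would be `for key, _ in db.iterator(): n += 1`):

```python
# Counting keys by a full scan -- the only option LevelDB itself offers.
def count_keys(db):
    """O(n): walk every key/value pair and tally them."""
    n = 0
    for _ in db.items():   # plyvel equivalent: db.iterator()
        n += 1
    return n


store = {b"a": b"1", b"b": b"2", b"c": b"3"}
print(count_keys(store))  # 3
```

This is fine for small databases but becomes prohibitively slow as the key count grows, which is why the issue tracker recommends maintaining the count yourself.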
Probably this API was not required by the original authors when LevelDB was built. Sadly, LevelDB does not have an increment API which you could use for counting. What you can do right now is read, modify, and write a key in LevelDB, but that is not thread-safe on its own.
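A minimal sketch of that in-application counter, with the read-modify-write guarded by a lock since LevelDB offers no atomic increment. The class name, the reserved key, and the dict standing in for the database handle are all hypothetical:

```python
import threading

# Hypothetical wrapper: the key count lives under a reserved key and
# every insert does a lock-protected read-modify-write.
class CountedDB:
    COUNT_KEY = "__key_count__"          # reserved key, assumed unused

    def __init__(self, db):
        self.db = db                     # dict stand-in for a leveldb handle
        self.db.setdefault(self.COUNT_KEY, 0)
        self._lock = threading.Lock()    # serialises the read + write

    def put(self, key, value):
        with self._lock:
            if key not in self.db:       # only new keys bump the count
                self.db[self.COUNT_KEY] += 1
            self.db[key] = value

    def count(self):
        return self.db[self.COUNT_KEY]   # O(1), no scan needed


db = CountedDB({})
db.put("a", 1)
db.put("b", 2)
db.put("a", 3)                           # overwrite: count unchanged
print(db.count())                        # 2
```

The lock only protects writers within one process; across processes you would need LevelDB's WriteBatch plus some external coordination.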
You could also have a look at Redis, if it is better suited to your use case.