 

How to do Lazy Map deserialization in Haskell

Similar to this question by @Gabriel Gonzalez: How to do fast data deserialization in Haskell

I have a big Map full of Integers and Text that I serialized using cereal. The file is about 10 MB.

Every time I run my program I deserialize the whole thing just so I can look up a handful of the items. Deserialization takes about 500 ms, which isn't a big deal, but I always seem to like profiling on Fridays.
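For context, the save/load cycle looks roughly like this (sketched with the binary boot library instead of cereal so the example is self-contained; cereal's encodeFile/decodeFile work the same way, and the file name here is made up):

```haskell
import qualified Data.Binary as B
import qualified Data.Map as M

main :: IO ()
main = do
  -- build a toy version of the big Int-keyed map
  let m = M.fromList [(i, show i) | i <- [1 .. 100 :: Int]]
  -- serialize the whole map to disk in one shot
  B.encodeFile "cache.bin" m
  -- ... and later, deserialize the whole thing back,
  -- even though we only want a single key
  m' <- B.decodeFile "cache.bin" :: IO (M.Map Int String)
  print (M.lookup 42 m')
```

decodeFile has to parse every entry before the lookup can run, which is exactly the cost being paid on every start.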

It seems wasteful to always deserialize 100k to 1M items when I only ever need a few of them.

I tried decodeLazy and also changing the map to a Data.Map.Lazy (not really understanding how a Map can be lazy, but OK, it's there), and neither has any effect on the time, except maybe it's a little slower.
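For what it's worth, the laziness in Data.Map.Lazy is only in the values, not the spine: every key still has to exist (and therefore be parsed) before the map does, which is why switching map types doesn't avoid the full decode. A tiny sketch of the distinction:

```haskell
import qualified Data.Map.Lazy as ML

main :: IO ()
main = do
  -- the spine (both keys) is built eagerly, but the
  -- value thunks are not forced...
  let lm = ML.fromList [(1 :: Int, "one"), (2, undefined :: String)]
  -- ...so looking up key 1 never touches the undefined value
  print (ML.lookup 1 lm)
  -- (Data.Map.Strict.fromList would have blown up on the
  -- undefined value while building the map)
```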

I'm wondering if there's something that can be a bit smarter, only loading and decoding what's necessary. Of course, a database like SQLite can be very large, but it only loads what it needs to complete a query. I'd like to find something like that, but without having to create a database schema.

Update

You know what would be great? Some fusion of Mongo with SQLite. Like a JSON document database using flat-file storage ... and of course someone has done it: https://github.com/hamiltop/MongoLiteDB ... in Ruby :(

Thought mmap might help. Tried the mmap library and segfaulted GHCi for the first time ever. No idea how I can even report that bug.

Tried the bytestring-mmap library, and that works, but there's no performance improvement. Just replacing this:

ser <- BL.readFile cacheFile

With this:

ser <- unsafeMMapFile cacheFile

Update 2

keyvaluehash may be just the ticket. Performance seems really good. But the API is strange and documentation is missing so it will take some experimenting.

Update 3: I'm an idiot

Clearly what I want here is not lazier deserialization of a Map. I want a key-value database, and there are several options available, like dvm, tokyo-cabinet, and this LevelDB thing I've never seen before.

Keyvaluehash looks to be a native-Haskell key-value database, which I like, but I still don't know about its quality. For example, you can't ask the database for a list of all keys or all values (the only real operations are readKey, writeKey and deleteKey), so if you need that you have to store it somewhere else. Another drawback is that you have to tell it a size when you create the database. I used a size of 20M so I'd have plenty of room, but the actual database it created occupies 266M. No idea why, since there isn't a line of documentation.

asked Nov 11 '22 by Michael Fox

1 Answer

One way I've done this in the past is to just make a directory where each file is named by a serialized key. One can use unsafeInterleaveIO to "thunk" the deserialized contents of each read file, so that values are only forced on read...
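A minimal sketch of that idea, using only boot libraries (the directory name and the String keys/values are made up for illustration; a real version would serialize the key into a filesystem-safe name):

```haskell
import qualified Data.Map as M
import System.Directory (createDirectoryIfMissing, listDirectory)
import System.IO.Unsafe (unsafeInterleaveIO)

-- write each key-value pair as its own file, named by the key
writeStore :: FilePath -> M.Map String String -> IO ()
writeStore dir m = do
  createDirectoryIfMissing True dir
  mapM_ (\(k, v) -> writeFile (dir ++ "/" ++ k) v) (M.toList m)

-- build a Map whose values are unread thunks; a file is only
-- read when its value is actually forced
readStore :: FilePath -> IO (M.Map String String)
readStore dir = do
  ks <- listDirectory dir
  kvs <- mapM (\k -> do
           v <- unsafeInterleaveIO (readFile (dir ++ "/" ++ k))
           return (k, v)) ks
  return (M.fromList kvs)

main :: IO ()
main = do
  writeStore "kvstore" (M.fromList [("a", "1"), ("b", "2")])
  m <- readStore "kvstore"
  -- forcing this value reads only kvstore/a; kvstore/b stays untouched
  print (M.lookup "a" m)
```

The spine (the file names) is still built eagerly, but the expensive part, reading and deserializing values, is deferred per key.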

answered Nov 15 '22 by sclv