The question: What solutions or tips would you have for dealing with a very large (multi-terabyte) database indexed on strong hashes with high redundancy?
Some kind of inverted storage?
Is there something that could be done with Postgres?
I am ready to roll my own storage if needed.
(Hint: Must be open source, no Java, must run on Linux, must be disk-based, C/C++/Python preferred)
The details:
I need to create a very large database where each record has: an ID (the primary key), a strong hash, and some metadata.
The volume of records is what I would qualify as quite large: several tens to hundreds of billions. There is significant redundancy of hashes across rows (over 40% of the records share their hash with at least one other record, and some hashes appear in 100K records).
The primary usage is to look up by hash, then retrieve the metadata. The secondary usage is to look up by primary key, then retrieve the metadata.
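For concreteness, the two access patterns might look like this in SQL (the "records" table and its column names are assumptions for illustration, not from the original post):

    -- Primary usage: find every record sharing a given hash, then fetch metadata.
    SELECT id, metadata FROM records WHERE hash = $1;

    -- Secondary usage: fetch the metadata for a single primary key.
    SELECT metadata FROM records WHERE id = $1;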
This is an analytics-type database, so the overall load is medium: mostly reads, few writes, and writes mostly in batches.
The current approach is to use Postgres, with an index on the primary key and an index on the hash column. The table is loaded in batches with the index on the hash column dropped, then re-created afterwards.
All indexes are b-trees. The index on the hash column is growing huge, as big as or bigger than the table itself. On a 120 GB table it takes about a day to recreate the index. Query performance is quite good, though.
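A minimal sketch of that load pattern, assuming the same hypothetical records table and a CSV input file (names and paths are illustrative):

    -- Drop the expensive index before the batch load.
    DROP INDEX IF EXISTS records_hash_idx;

    -- Bulk-load the batch; COPY is much faster than row-by-row INSERTs.
    COPY records (id, hash, metadata) FROM '/path/to/batch.csv' WITH (FORMAT csv);

    -- Rebuild the b-tree afterwards (the step that takes about a day at 120 GB).
    CREATE INDEX records_hash_idx ON records USING btree (hash);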
The problem is that the projected size of the target database will be over 4 TB, based on tests with a smaller 400 GB data set representing about 10% of the total target. Once loaded in Postgres, over 50% of the storage is unfortunately used by the SQL index on the hash column.
This is way too big, and I feel that the redundancy in hashes is an opportunity to store less.
Note also that while this describes the problem, there are a few of these tables that need to be created.
Hash databases are frequently used to identify known-good and known-bad files. Text files of MD5 and SHA-1 hashes can be easily created and shared, but they are frequently not the most efficient to search for a hash because they are unsorted.
Approximate Hash Based Matching (AHBM), also known as Fuzzy Hashing, is used to identify complex and unstructured data that has a certain amount of byte-level similarity. Common use cases include the identification of updated versions of documents and fragments recovered from memory or deleted files.
You could create one table with only an id and the hash, and put your other data in a second table with its own index, the metadata, and a hashId column referencing the hash table. Doing so, you avoid writing the same hash up to 100K times in the main table.
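A sketch of that layout in Postgres (all names are illustrative): each distinct hash is stored once, and the per-record rows carry only a compact 8-byte reference:

    -- Each distinct hash is stored exactly once.
    CREATE TABLE hashes (
        hash_id  bigserial PRIMARY KEY,
        hash     bytea NOT NULL UNIQUE   -- UNIQUE creates the b-tree on the hash
    );

    -- Per-record rows reference the hash by its integer id.
    CREATE TABLE records (
        id       bigint PRIMARY KEY,
        hash_id  bigint NOT NULL REFERENCES hashes (hash_id),
        metadata text
    );
    CREATE INDEX records_hash_id_idx ON records (hash_id);

    -- Lookup by hash: resolve the hash once, then fetch all matching records.
    SELECT r.id, r.metadata
    FROM hashes h
    JOIN records r ON r.hash_id = h.hash_id
    WHERE h.hash = $1;

With over 40% of the records sharing hashes, the big b-tree only has to index each distinct digest once, while the per-row index covers 8-byte integers instead of full hashes, which should shrink the index storage considerably.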