Serialization with Protocol Buffers in a Schemaless Database

We're using MySQL to store schemaless data (see Using a Relational Database for Schemaless Data, a solution inspired by how FriendFeed uses MySQL for the same purpose).

One big table holds all entities for our application:

CREATE TABLE entities (
  added_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
, id BINARY(16) NOT NULL
, body MEDIUMBLOB
, UNIQUE KEY (id)
) ENGINE=InnoDB ;

A few details:

  • The only required property of stored entities is id, a 16-byte UUID. The rest of the entity is opaque to the database. We can change the "schema" simply by storing new properties in the body.

  • The added_id column is present because InnoDB stores data rows physically in primary key order. The AUTO_INCREMENT primary key ensures new entities are written sequentially on disk after old entities, which helps for read/write locality (new entities are read more frequently than old entities).

  • Our database stores our schemaless data in the body column, which is the topic of this question.

  • There are plenty of other interesting details, like "reaching into" the body data to build asynchronous materialized views (indexes are just tables that are built offline), but they're not relevant to the current discussion. (For concreteness, a minimal round-trip sketch follows this list.)
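Here's roughly how an entity round-trips through this table. This is a sketch, not our actual code: it assumes Python with the MySQLdb driver, an entity dict carrying a hex-string "id", and json.dumps as the naive baseline body format that the rest of this question is about replacing.

import json
import uuid
import MySQLdb

conn = MySQLdb.connect(db="app")

def put_entity(entity):
    # json.dumps is the naive baseline; the open question is what
    # should replace this line with something more space-efficient.
    body = json.dumps(entity)
    cur = conn.cursor()
    cur.execute(
        "INSERT INTO entities (id, body) VALUES (%s, %s)",
        (uuid.UUID(entity["id"]).bytes, body),
    )
    cur.close()
    conn.commit()

def get_entity(entity_id):
    cur = conn.cursor()
    cur.execute(
        "SELECT body FROM entities WHERE id = %s",
        (uuid.UUID(entity_id).bytes,),
    )
    row = cur.fetchone()
    cur.close()
    return json.loads(row[0]) if row else None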

How should we be serializing the structured data (key-value pairs) in the body?

JSON or BSON would be simple, but the field names are repeated for each row. That gives them an advantage in flexibility but a big disadvantage in space efficiency (a per-row overhead for field names in the serialized data). We're trying to keep things in memory, and minimizing both memory and network footprint is important here. The more records we can fit in the same space, the faster our queries will be. We prefer relatively long, descriptive field names, and shortening them just to make the database faster is wrong!

In the end, JSON/BSON is unworkable for our purposes, unless we get more complex and map short keys to more descriptive keys in the application driver that talks to the database.
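Something like this hypothetical driver-side shim (Python; the names in FIELD_MAP are made up for illustration), which swaps descriptive keys for one-byte keys around JSON encoding:

import json

# Hypothetical mapping, owned by the driver; callers only ever see long names.
FIELD_MAP = {"user_id": "u", "updated_at": "t", "body_text": "b"}
REVERSE_MAP = {v: k for k, v in FIELD_MAP.items()}

def shrink(entity):
    # Encode with short keys to cut the per-row field-name overhead.
    return json.dumps({FIELD_MAP.get(k, k): v for k, v in entity.items()})

def expand(blob):
    # Decode and restore the descriptive field names.
    return {REVERSE_MAP.get(k, k): v for k, v in json.loads(blob).items()}

Which got us thinking...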

Although our database is schemaless, in reality: 1) there aren't too many different kinds of entities, 2) versions of the same kind of entity don't change often, and 3) when they do change, it's usually just to add another field. JSON/BSON have no native support for versioning.

Protocol Buffers and Thrift are much more sophisticated when it comes to versioning and data definition changes. Both Thrift and Protocol Buffers are great candidates for serializing data into databases, and Thrift is designed so that the encoding format is extensible.
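To illustrate what that buys us (a sketch, not our actual schema; entity.proto and the Entity message are hypothetical), protobuf tags fields by number rather than name, which makes "just add another field" cheap:

# Hypothetical entity.proto (proto2, as was current in 2010):
#
#   message Entity {
#     required bytes  id    = 1;   // 16-byte UUID, mirrors the id column
#     optional string name  = 2;
#     optional string email = 3;   // v2: added later; v1 readers skip it
#   }
import uuid
from entity_pb2 import Entity  # generated by: protoc --python_out=. entity.proto

e = Entity()
e.id = uuid.uuid4().bytes
e.name = "Évariste Galois"
blob = e.SerializeToString()  # wire format carries tag numbers, not field names

e2 = Entity()
e2.ParseFromString(blob)  # code built against the old .proto parses v2 data too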

Protocol Buffers look like a great choice for serializing data in a schemaless database.

CouchDB and MongoDB (the two most popular schemaless databases?) use JSON and BSON, respectively, but we can't find anything about using something more advanced, like Protocol Buffers, as a serialization format for storing schemaless data. There are products that store a specific language's version of objects (e.g. storing Java's Externalizable objects in a data grid, or doing NoSQL with MySQL in Ruby), but these are a pain: try accessing them from other platforms, or even from MySQL itself, and forget about versioning.

Is anyone storing the more interoperable Protocol Buffers, or some other advanced serialization format, in their schemaless database? This is a question of whether there are other options besides straightforward per-row serialization of JSON/BSON/XML, or serializing a specific language's objects. Is it even feasible? Are we missing something? Sorry for the stream-of-consciousness narrative!

asked Nov 25 '10 by Évariste Galois

2 Answers

As you found out, MongoDB and CouchDB have strong opinions about how you store your data. If you're looking for a storage-agnostic approach, you'll want to do something like @Joshua suggests and look at Cassandra or HBase. Even these two datastores have opinions about how data should be stored (both borrow their data model from Google's Bigtable) and organize data into column families.

Riak uses protocol buffers as one method of serializing data from your application into the datastore. It might be worth checking out to see if it fits your needs. It looks like you're largely planning to do single-key lookups, so Riak may be a strong contender for your solution.

answered Oct 10 '22 by Jeremiah Peschka


You may want to look into something like Cassandra or HBase for storing your data. The issue with the opaque data blob is that you can't query based on it with your MySQL schema here: if you're looking for something, you'll have to read in every blob and check it. If that's really unimportant to how you do lookups (i.e. you always know the key), then I would suggest using protocol buffers to serialize the data, possibly compressing it with zlib or LZO.

Protocol buffers allow you to create a simple data structure that can accept additional fields as your data evolves. Field names are stored as tag numbers, and the code to work with the structures is generated automatically from your .proto file. Performance is good and data sizes are kept quite small. You could optionally compress the data, either with MySQL's COMPRESS() function or with one of the real-time compression libraries summarized here (not just Java):

Fast compression in Java?
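A rough sketch of that combination (Python; Entity is a hypothetical class generated from a .proto like the one sketched in the question), stacking zlib on top of protobuf before the bytes hit the body column:

import uuid
import zlib
from entity_pb2 import Entity  # hypothetical generated protobuf class

def encode_body(entity):
    # Protobuf's wire format is already compact; zlib squeezes out
    # the remaining redundancy in string-heavy entities.
    return zlib.compress(entity.SerializeToString())

def decode_body(blob):
    entity = Entity()
    entity.ParseFromString(zlib.decompress(blob))
    return entity

e = Entity()
e.id = uuid.uuid4().bytes
row_body = encode_body(e)  # goes into the MEDIUMBLOB body column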

Hope this helps.

answered Oct 10 '22 by Joshua Martell