 

Getting large rows out of SQL Azure - but where to go? Tables, Blob or something like MongoDB?

I read through a lot of comparisons between Azure Table/Blob/SQL storage and I think I have a good understanding of all of them ... but still, I'm unsure where to go for my specific needs. Maybe someone with experience in similar scenarios can make a recommendation.

What I have

A SQL Azure DB that stores articles in raw HTML inside a varchar(max) column. Each row also has many metadata columns and many indexes for easy querying. The table contains many references to Users, Subscriptions, Tags and more - so a SQL DB will always be needed for my project.

What's the problem

I already have about 500,000 articles in this table and I expect it to grow by millions of articles per year. Each article's HTML content can be anywhere between a few KB and 1 MB or, in very few cases, larger than 1 MB.

Two problems arise: Azure SQL storage is expensive, so sooner rather than later I'll shoot myself in the foot with the costs of storing this. I will also hit the 150 GB DB size limit sooner rather than later. Those 500,000 articles already consume 1.6 GB of DB space.

What I want

It's clear the HTML content has to get out of the SQL DB. While the article table itself has to remain for joining it to users, subscriptions, tags and more for fast relational discovery of the needed articles, at least the column that holds the HTML content could be outsourced to cheaper storage.

At first sight, Azure Table storage seems like the perfect fit

Terabytes of data in one large table at very cheap prices and with fast queries - it sounds perfect to have a single Table Storage table holding the article contents as an add-on to the SQL DB.

But reading through comparisons here shows it might not even be an option: 64 KB per column would be enough for 98% of my articles, but there are the remaining 2% where, for some single articles, even the whole 1 MB row limit might not be enough.

Blob storage sounds completely wrong, but ...

So there's just one option left on Azure: Blobs. Now, it might not be as wrong as it sounds. In most cases, I would need the content of only a single article at once. This should work fine and fast enough with Blob storage.

But I also have queries where I would need 50, 100 or even more rows at once, INCLUDING the content. So I would have to run the SQL query to fetch the needed articles and then fetch every single article out of Blob storage. I have no experience with that, but I can't believe I'd be able to stay in the millisecond range for those queries. And queries that take multiple seconds are an absolute no-go for my project.

So it also does not seem to be an appropriate solution.

Do I look like a guy with a plan?

At least I have something like a plan. I thought about only "exporting" appropriate records from the SQL table into Table Storage and/or Blob storage.

Something like "as long as the content is < 64 KB export it to table storage, else keep it in the SQL table (or even export this single XL record into BLOB storage)"
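Roughly, the routing I have in mind looks like this (just a sketch to illustrate the idea; SaveToTableStorage and SaveToBlobStorage are hypothetical helpers, not real APIs):

    using System.Text;

    const int TableStorageColumnLimit = 64 * 1024; // 64 KB per Table Storage property

    void StoreArticleContent(int articleId, string html)
    {
        // route by size: small articles go to the cheap store, XL ones elsewhere
        if (Encoding.UTF8.GetByteCount(html) < TableStorageColumnLimit)
            SaveToTableStorage(articleId, html);
        else
            SaveToBlobStorage(articleId, html); // or keep this single XL record in SQL
        // metadata, indexes and relations stay in the SQL Azure table either way
    }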

That might work well enough. But it makes things complicated and maybe unnecessarily error-prone.

Those other options

There are some other NoSQL DBs, like MongoDB and CouchDB, that seem to better fit my needs (at least from my naive point of view as someone who has just read the specs on paper; I don't have experience with them). But they'd require self-hosting, something I'd like to avoid if possible. I'm on Azure to do as little as possible in terms of self-hosting servers and services.

Did you really read this far?

Then thank you very much for your valuable time and thinking about my problems :)

Any suggestions would be greatly appreciated. As you see, I have my ideas and plans, but nothing beats experience from someone who walked down the road before :)

Thanks, Bernhard

Asked May 23 '13 by Bernhard Koenig


1 Answer

I signed up solely to help with this question. In the past, I have found useful answers to my problems on Stack Overflow - thank you, community - so I thought it would only be fair (perhaps fair is an understatement) to attempt to give something back with this question, as it is right up my alley.

In short, considering all factors stated in the question, Table Storage may be the best option - if you can properly estimate transactions per month (there is a nice article on this). You can solve the two limitations you mentioned, the row and column limits, by splitting the document/html/data (plain-text splitting or serializing it). I'm speaking from experience with 40 GB+ of data stored in Table Storage, where our app frequently retrieves more than 10 rows per page visit in milliseconds - no argument there! If you need 50+ rows at times, you are looking at low single-digit seconds, or you can fetch them in parallel (and further split the data across different partitions), or in some async fashion. Or read about the multi-level caching suggested below.
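To give an idea of the parallel retrieval, here is a minimal sketch, assuming the Microsoft.WindowsAzure.Storage table client; ArticleEntity and the method names are made up for illustration:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.Storage.Table;

    // Illustrative entity for the per-article data kept in Table Storage.
    public class ArticleEntity : TableEntity
    {
        public string Summary { get; set; }
    }

    // Fetch many known (PartitionKey, RowKey) pairs in parallel instead of one by one.
    static async Task<List<ArticleEntity>> FetchArticlesAsync(
        CloudTable table, IEnumerable<Tuple<string, string>> keys)
    {
        var tasks = keys.Select(k => Task.Run(() =>
        {
            var retrieve = TableOperation.Retrieve<ArticleEntity>(k.Item1, k.Item2);
            return (ArticleEntity)table.Execute(retrieve).Result;
        }));

        return (await Task.WhenAll(tasks)).ToList();
    }

With each point lookup in the tens of milliseconds, 50-100 of them fired in parallel is how you stay well away from multi-second queries.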

A bit more detail. I tried SQL Azure, Blob storage (both page and block blobs), and Table Storage. I cannot speak for MongoDB since, partly for the reasons already mentioned here, I did not want to go that route.

  • Table Storage is fast: in the range of 20-50 milliseconds, or sometimes even faster (it depends; for instance, within the same data center I have seen it go as low as 10 milliseconds), when querying with partition and row key. You may also split the data further into several partitions, based on your data and your knowledge about it.
  • It scales better in terms of GBs, but not transactions.
  • The row and column limitations you mentioned are a burden, agreed, but not a show stopper. I have written my own solution to split entities, and you can too, easily - or you can look at this already-written solution (it does not solve the whole problem, but it is a good start): https://code.google.com/p/lokad-cloud/wiki/FatEntities (see the sketch right after this list for the basic idea).
  • Also keep in mind that uploading data to Table Storage is time consuming, even when batching entities, due to other limitations (e.g., request size less than 4 MB, upload bandwidth, etc.).
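To illustrate the splitting idea from the list above, here is a minimal sketch, assuming the (2013-era) Microsoft.WindowsAzure.Storage table client; the "Html0"/"PartCount" property names are made up:

    using System;
    using Microsoft.WindowsAzure.Storage.Table;

    // Split one article's HTML across several string properties of a single entity
    // to stay under the ~64 KB-per-property limit; a part counter allows re-assembly.
    static DynamicTableEntity SplitIntoEntity(string partitionKey, string rowKey, string html)
    {
        const int chunkChars = 32 * 1024; // 32K UTF-16 chars ~= 64 KB per property

        var entity = new DynamicTableEntity(partitionKey, rowKey);
        int part = 0;

        for (int offset = 0; offset < html.Length; offset += chunkChars)
        {
            int length = Math.Min(chunkChars, html.Length - offset);
            entity.Properties["Html" + part++] =
                EntityProperty.GeneratePropertyForString(html.Substring(offset, length));
        }

        entity.Properties["PartCount"] = EntityProperty.GeneratePropertyForInt(part);
        return entity;
    }

Your rare 1 MB+ articles would still exceed the per-entity limit and would need to be split across several rows, or fall back to blob storage as described further below.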

But using Table Storage alone may not be the best solution (thinking about growth and economics). The best solution we ended up implementing uses multi-level caching/storage, starting with static classes, then Azure role-based cache, Table Storage, and block blobs. For readability, let's call these levels 1A, 1B, 2 and 3, respectively. Using this approach, we are running a single medium instance (2 CPU cores and 3.5 GB RAM - my laptop has better performance) and are able to process/query/rank 100 GB+ of data in seconds (95% of cases in under 1 second). I believe this is fairly impressive given that we check all "articles" before displaying them (4+ million "articles").

First, this is tricky and may or may not be possible in your case. I do not have sufficient knowledge about your data and its query/processing usage, but if you can find a way to organize the data well, this may be ideal. I will make an assumption: it sounds like you are trying to search through and find relevant articles given some information about a user and some tags (perhaps a variant of a news aggregator; just a hunch). This assumption is made for the sake of illustrating the suggestion, so even if it is not correct, I hope it will help you or trigger new ideas on how this could be adapted.

Level 1A data. Identify and add key entities or their properties in a static class (periodically, depending on how you foresee updates). Say we identify user preferences (e.g., demographics, interests, etc.) and tags (tech, politics, sports, etc.). This will be used to quickly retrieve who the user is, his/her preferences, and any tags. Think of these as key/value pairs; for instance, the key is a tag and its value is a list of article IDs, or a range of them. This solves a small piece of the problem, namely: given a set of keys (user preferences, tags, etc.), which articles are we interested in? This data should be small in size if organized properly (e.g., instead of storing the article path, you can store just a number). *Note: the problem with data persistence in a static class is that the application pool in Azure, by default, resets after roughly 20 minutes of inactivity, so the data in your static class is no longer persistent - also, sharing it across instances (if you have more than one) can become a burden. Welcome level 1B to the rescue.
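To make level 1A concrete, here is a minimal sketch of such a static key/value holder; all names are made up, and in a multi-threaded web role you would also guard these (locks or ConcurrentDictionary):

    using System.Collections.Generic;

    // Illustrative level 1A container: static lookups from keys (tags, users) to article IDs.
    public static class ArticleIndex
    {
        // e.g. "tech" -> [102, 344, 998, ...]; storing only numeric IDs keeps it small
        public static Dictionary<string, List<int>> ArticleIdsByTag =
            new Dictionary<string, List<int>>();

        // e.g. user ID -> preferred tags ("tech", "sports", ...)
        public static Dictionary<int, List<string>> TagsByUser =
            new Dictionary<int, List<string>>();
    }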

Level 1B data. The solution we used is to keep the level 1A data in Azure Cache, with the sole purpose of re-populating the static entity when and if needed. Level 1B data solves this problem. Also, if you face issues with the application pool reset timing, you can change it programmatically. So levels 1A and 1B hold the same data, but one is faster than the other (close-enough analogy: CPU cache and RAM).
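A rough sketch of that re-populate step, assuming the in-role cache API (Microsoft.ApplicationServer.Caching) and the illustrative ArticleIndex class from above:

    using System.Collections.Generic;
    using Microsoft.ApplicationServer.Caching;

    // Restore the level 1A statics from the level 1B cache after an app-pool recycle.
    public static class Level1Loader
    {
        private static readonly DataCache Cache = new DataCacheFactory().GetDefaultCache();

        public static void EnsureLoaded()
        {
            if (ArticleIndex.ArticleIdsByTag.Count > 0)
                return; // the statics survived; nothing to do

            var cached = Cache.Get("ArticleIdsByTag") as Dictionary<string, List<int>>;
            if (cached != null)
                ArticleIndex.ArticleIdsByTag = cached; // warm restore from the cache
            // else: rebuild from Table Storage / SQL and Put() the result back into the cache
        }
    }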

Discussing levels 1A and 1B a bit. One may point out that using both a static class and the cache is overkill, since it uses more memory. But the problem we found in practice is that, first, statics are faster. Second, the cache has some limitations (i.e., 8 MB per object). With big data, that is a small limit. By keeping data in a static class, one can have objects larger than 8 MB and store them in the cache by splitting them (i.e., currently we have over 40 splits). BTW, please vote to increase this limit in the next release of Azure, thank you! Here is the link: www.mygreatwindowsazureidea.com/forums/34192-windows-azure-feature-voting/suggestions/3223557-azure-preview-cache-increase-max-item-size
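The splitting workaround looks roughly like this (again assuming the in-role cache API; the key-naming scheme is just an example):

    using System;
    using Microsoft.ApplicationServer.Caching;

    // Store a large serialized object as pieces that fit under the 8 MB per-object
    // cache limit, plus a chunk count so it can be re-assembled on read.
    static void PutLarge(DataCache cache, string key, byte[] data)
    {
        const int chunkSize = 7 * 1024 * 1024; // stay safely below the 8 MB cap
        int chunkCount = (data.Length + chunkSize - 1) / chunkSize;
        cache.Put(key + ":count", chunkCount);

        for (int i = 0; i < chunkCount; i++)
        {
            int offset = i * chunkSize;
            int length = Math.Min(chunkSize, data.Length - offset);

            var piece = new byte[length];
            Buffer.BlockCopy(data, offset, piece, 0, length);
            cache.Put(key + ":" + i, piece);
        }
    }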

Level 2 data. Once we get the values from the key/value entity (level 1A), we use those values to retrieve the data from Table Storage. The value should tell you which partition and row key you need. The problem being solved here: you only query the rows relevant to the user/search context. As you can see now, the point of the level 1A data is to minimize row querying from Table Storage.
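A minimal sketch of such a targeted query, re-using the illustrative ArticleEntity from earlier and assuming the level 1A value gives you a partition key plus a RowKey range:

    using System.Collections.Generic;
    using Microsoft.WindowsAzure.Storage.Table;

    // Pull only the rows the level 1A lookup pointed us to: one partition, one RowKey range.
    static IEnumerable<ArticleEntity> QueryPartition(
        CloudTable table, string partitionKey, string fromRowKey, string toRowKey)
    {
        string filter = TableQuery.CombineFilters(
            TableQuery.GenerateFilterCondition(
                "PartitionKey", QueryComparisons.Equal, partitionKey),
            TableOperators.And,
            TableQuery.CombineFilters(
                TableQuery.GenerateFilterCondition(
                    "RowKey", QueryComparisons.GreaterThanOrEqual, fromRowKey),
                TableOperators.And,
                TableQuery.GenerateFilterCondition(
                    "RowKey", QueryComparisons.LessThanOrEqual, toRowKey)));

        return table.ExecuteQuery(new TableQuery<ArticleEntity>().Where(filter));
    }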

Level 3 data. The Table Storage data can hold a summary of your articles, or the first paragraph, or something of that nature. When the whole article needs to be shown, you get it from Blob storage. Table Storage should also have a column that uniquely identifies the full article in blob storage. In blob storage you may organize the data in the following manner:

  1. Split each article in separate files.
  2. Group n articles in one file.
  3. Group all articles in one file (not recommended although not as bad as the first impression one may get).

For the 1st option you would store, in Table Storage, the path of the article, then just grab it directly from Blob storage. Because of the levels above, you should only need to read a few full articles here.
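For instance (a sketch assuming the Microsoft.WindowsAzure.Storage blob client; the "articles" container name is made up):

    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    // The Table Storage row stores the blob path, so the full article is one direct download.
    static string DownloadArticleHtml(string connectionString, string blobPath)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference("articles");
        CloudBlockBlob blob = container.GetBlockBlobReference(blobPath);
        return blob.DownloadText(); // the raw HTML for this article
    }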

For the 2nd and 3rd options you would store, in Table Storage, the path of the file plus the start and end positions from which to read and where to stop reading, using seek.

Here is some sample code in C#:

    // YourBlobClientWithReferenceToTheFile is assumed to be a readable stream over
    // the combined blob (e.g., what CloudBlockBlob.OpenRead() returns), and
    // TableStorageData.start / .end are the byte offsets stored in Table Storage.
    int numBytesToRead = (int)(TableStorageData.end - TableStorageData.start);
    int numBytesRead = 0;
    byte[] bytes = new byte[numBytesToRead];

    // Jump to the article's start offset inside the blob.
    YourBlobClientWithReferenceToTheFile.Seek(TableStorageData.start, SeekOrigin.Begin);

    // Read until the article's end offset (or the end of the stream) is reached.
    while (numBytesToRead > 0)
    {
        int n = YourBlobClientWithReferenceToTheFile.Read(bytes, numBytesRead, numBytesToRead);
        if (n == 0)
            break;

        numBytesRead += n;
        numBytesToRead -= n;
    }

I hope this didn't turn into a book, and I hope it was helpful. Feel free to contact me if you have follow-up questions or comments. Thanks!

Answered by Merg