
Painfully slow Azure table insert and delete batch operations

I am running into a huge performance bottleneck when using Azure table storage. My desire is to use tables as a sort of cache, so a long process may result in anywhere from hundreds to several thousand rows of data. The data can then be quickly queried by partition and row keys.

The querying is working pretty fast (extremely fast when only using partition and row keys, a bit slower, but still acceptable when also searching through properties for a particular match).
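
For reference, here is roughly what I mean by the two kinds of queries. This is just a sketch with placeholder key and property values, assuming `table` is a `CloudTable` from the 2.x client library:

// needs: using Microsoft.WindowsAzure.Storage.Table;
static void QueryExamples( CloudTable table )
{
    // fast: exact lookup by PartitionKey + RowKey
    TableOperation retrieve = TableOperation.Retrieve<DynamicTableEntity>( "filename.txt", "$item0000000042" );
    TableResult result = table.Execute( retrieve );

    // slower but still acceptable: scan one partition, filtering on a property
    string filter = TableQuery.CombineFilters(
        TableQuery.GenerateFilterCondition( "PartitionKey", QueryComparisons.Equal, "filename.txt" ),
        TableOperators.And,
        TableQuery.GenerateFilterCondition( "Name", QueryComparisons.Equal, "42" ) );

    var query = new TableQuery<DynamicTableEntity>().Where( filter );

    foreach ( var entity in table.ExecuteQuery( query ) )
    {
        // every entity in the partition whose "Name" property matches lands here
    }
}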

However, both inserting and deleting rows is painfully slow.

Clarification

I want to clarify that even inserting a single batch of 100 items takes several seconds. This isn't just a problem with total throughput of thousands of rows. It is affecting me when I only insert 100.

Here is an example of my code to do a batch insert to my table:

static async Task BatchInsert( CloudTable table, List<ITableEntity> entities )
{
    int rowOffset = 0;

    while ( rowOffset < entities.Count )
    {
        Stopwatch sw = Stopwatch.StartNew();

        var batch = new TableBatchOperation();

        // next batch
        var rows = entities.Skip( rowOffset ).Take( 100 ).ToList();

        foreach ( var row in rows )
            batch.Insert( row );

        // submit
        await table.ExecuteBatchAsync( batch );

        rowOffset += rows.Count;

        Trace.TraceInformation( "Elapsed time to batch insert " + rows.Count + " rows: " + sw.Elapsed.ToString( "g" ) );
    }
}

I am using batch operations, and here is one sample of debug output:

Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Starting asynchronous request to http://127.0.0.1:10002/devstoreaccount1.
Microsoft.WindowsAzure.Storage Verbose: 4 : b08a07da-fceb-4bec-af34-3beaa340239b: StringToSign = POST..multipart/mixed; boundary=batch_6d86d34c-5e0e-4c0c-8135-f9788ae41748.Tue, 30 Jul 2013 18:48:38 GMT./devstoreaccount1/devstoreaccount1/$batch.
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Preparing to write request data.
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Writing request data.
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Waiting for response.
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Response received. Status code = 202, Request ID = , Content-MD5 = , ETag = .
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Response headers were processed successfully, proceeding with the rest of the operation.
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Processing response body.
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Operation completed successfully.
iisexpress.exe Information: 0 : Elapsed time to batch insert 100 rows: 0:00:00.9351871

As you can see, this example takes almost 1 second to insert 100 rows. The average seems to be about 0.8 seconds on my dev machine (3.4 GHz quad-core).

This seems ridiculous.

Here is an example of a batch delete operation:

Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Starting asynchronous request to http://127.0.0.1:10002/devstoreaccount1.
Microsoft.WindowsAzure.Storage Verbose: 4 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: StringToSign = POST..multipart/mixed; boundary=batch_7e3d229f-f8ac-4aa0-8ce9-ed00cb0ba321.Tue, 30 Jul 2013 18:47:41 GMT./devstoreaccount1/devstoreaccount1/$batch.
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Preparing to write request data.
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Writing request data.
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Waiting for response.
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Response received. Status code = 202, Request ID = , Content-MD5 = , ETag = .
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Response headers were processed successfully, proceeding with the rest of the operation.
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Processing response body.
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Operation completed successfully.
iisexpress.exe Information: 0 : Elapsed time to batch delete 100 rows: 0:00:00.6524402

Consistently over .5 seconds.

I ran this deployed to Azure (small instance) as well, and have recorded times of 20 minutes to insert 28000 rows.

I am currently using the 2.1 RC version of the Storage Client Library (announced on the MSDN blog).

I must be doing something very wrong. Any thoughts?

UPDATE

I've tried parallelism. The net effect is an overall speed improvement (with all 8 logical processors maxed out), but still barely 150 row inserts per second on my dev machine.

No better overall, as far as I can tell, and maybe even worse when deployed to Azure (small instance).

I've increased the thread pool size and the maximum number of HTTP connections for my WebRole by following this advice.
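
Roughly speaking, the advice boils down to settings like these (a sketch only; the values here are placeholders, not necessarily what I used), applied once before any storage calls:

// needs: using System.Net; using System.Threading;
// e.g. in the WebRole's entry point, before the first storage request
public override bool OnStart()
{
    ServicePointManager.DefaultConnectionLimit = 100;  // raise the per-host HTTP connection cap (the default is low)
    ServicePointManager.Expect100Continue = false;     // drop the 100-Continue round trip on POSTs
    ServicePointManager.UseNagleAlgorithm = false;     // Nagle delays small payloads like table batches
    ThreadPool.SetMinThreads( 100, 100 );              // avoid slow thread pool ramp-up

    return base.OnStart();
}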

I still feel that I am missing something fundamental that is limiting my inserts/deletes to 150 ROPS.

UPDATE 2

After analyzing some diagnostics logs from my small instance deployed to Azure (using the new logging built into the 2.1 RC Storage Client), I have a bit more information.

The first storage client log for a batch insert is at 635109046781264034 ticks:

caf06fca-1857-4875-9923-98979d850df3: Starting synchronous request to https://?.table.core.windows.net/.; TraceSource 'Microsoft.WindowsAzure.Storage' event 

Then almost 3 seconds later I see this log at 635109046810104314 ticks:

caf06fca-1857-4875-9923-98979d850df3: Preparing to write request data.; TraceSource 'Microsoft.WindowsAzure.Storage' event 

Then a few more logs, which take up a combined 0.15 seconds, ending with this one at 635109046811645418 ticks, which wraps up the insert:

caf06fca-1857-4875-9923-98979d850df3: Operation completed successfully.; TraceSource 'Microsoft.WindowsAzure.Storage' event 

I'm not sure what to make of this, but it is pretty consistent across the batch insert logs that I examined.

UPDATE 3

Here is the code used to batch insert in parallel. In this code, just for testing, I am ensuring that I am inserting each batch of 100 into a unique partition.

static async Task BatchInsert( CloudTable table, List<ITableEntity> entities )
{
    int rowOffset = 0;

    var tasks = new List<Task>();

    while ( rowOffset < entities.Count )
    {
        // next batch
        var rows = entities.Skip( rowOffset ).Take( 100 ).ToList();

        rowOffset += rows.Count;

        string partition = "$" + rowOffset.ToString();

        var task = Task.Factory.StartNew( () =>
            {
                Stopwatch sw = Stopwatch.StartNew();

                var batch = new TableBatchOperation();

                foreach ( var row in rows )
                {
                    row.PartitionKey = row.PartitionKey + partition;
                    batch.InsertOrReplace( row );
                }

                // submit
                table.ExecuteBatch( batch );

                Trace.TraceInformation( "Elapsed time to batch insert " + rows.Count + " rows: " + sw.Elapsed.ToString( "g" ) );
            } );

        tasks.Add( task );
    }

    await Task.WhenAll( tasks );
}

As stated above, this does help improve the overall time to insert thousands of rows, but each batch of 100 still takes several seconds.

UPDATE 4

So I created a brand new Azure Cloud Service project, using VS2012.2, with the Web Role as a single page template (the new one with the TODO sample in it).

This is straight out of the box, no new NuGet packages or anything. It uses the Storage client library v2 by default, and the EDM and associated libraries v5.2.

I simply modified the HomeController code to be the following (using some random data to simulate the columns that I want to store in the real app):

public ActionResult Index( string returnUrl )
{
    ViewBag.ReturnUrl = returnUrl;

    Task.Factory.StartNew( () =>
        {
            TableTest();
        } );

    return View();
}

static Random random = new Random();
static double RandomDouble( double maxValue )
{
    // the Random class is not thread safe!
    lock ( random ) return random.NextDouble() * maxValue;
}

void TableTest()
{
    // Retrieve storage account from connection-string
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
        CloudConfigurationManager.GetSetting( "CloudStorageConnectionString" ) );

    // create the table client
    CloudTableClient tableClient = storageAccount.CreateCloudTableClient();

    // retrieve the table
    CloudTable table = tableClient.GetTableReference( "test" );

    // create it if it doesn't already exist
    if ( table.CreateIfNotExists() )
    {
        // the container is new and was just created
        Trace.TraceInformation( "Created table named " + "test" );
    }

    Stopwatch sw = Stopwatch.StartNew();

    // create a bunch of objects
    int count = 28000;
    List<DynamicTableEntity> entities = new List<DynamicTableEntity>( count );

    for ( int i = 0; i < count; i++ )
    {
        var row = new DynamicTableEntity()
        {
            PartitionKey = "filename.txt",
            RowKey = string.Format( "$item{0:D10}", i ),
        };

        row.Properties.Add( "Name", EntityProperty.GeneratePropertyForString( i.ToString() ) );
        row.Properties.Add( "Data", EntityProperty.GeneratePropertyForString( string.Format( "data{0}", i ) ) );
        row.Properties.Add( "Value1", EntityProperty.GeneratePropertyForDouble( RandomDouble( 10000 ) ) );
        row.Properties.Add( "Value2", EntityProperty.GeneratePropertyForDouble( RandomDouble( 10000 ) ) );
        row.Properties.Add( "Value3", EntityProperty.GeneratePropertyForDouble( RandomDouble( 1000 ) ) );
        row.Properties.Add( "Value4", EntityProperty.GeneratePropertyForDouble( RandomDouble( 90 ) ) );
        row.Properties.Add( "Value5", EntityProperty.GeneratePropertyForDouble( RandomDouble( 180 ) ) );
        row.Properties.Add( "Value6", EntityProperty.GeneratePropertyForDouble( RandomDouble( 1000 ) ) );

        entities.Add( row );
    }

    Trace.TraceInformation( "Elapsed time to create record rows: " + sw.Elapsed.ToString() );

    sw = Stopwatch.StartNew();

    Trace.TraceInformation( "Inserting rows" );

    // batch our inserts (100 max)
    BatchInsert( table, entities ).Wait();

    Trace.TraceInformation( "Successfully inserted " + entities.Count + " rows into table " + table.Name );
    Trace.TraceInformation( "Elapsed time: " + sw.Elapsed.ToString() );

    Trace.TraceInformation( "Done" );
}

static async Task BatchInsert( CloudTable table, List<DynamicTableEntity> entities )
{
    int rowOffset = 0;

    var tasks = new List<Task>();

    while ( rowOffset < entities.Count )
    {
        // next batch
        var rows = entities.Skip( rowOffset ).Take( 100 ).ToList();

        rowOffset += rows.Count;

        string partition = "$" + rowOffset.ToString();

        var task = Task.Factory.StartNew( () =>
        {
            var batch = new TableBatchOperation();

            foreach ( var row in rows )
            {
                row.PartitionKey = row.PartitionKey + partition;
                batch.InsertOrReplace( row );
            }

            // submit
            table.ExecuteBatch( batch );

            Trace.TraceInformation( "Inserted batch for partition " + partition );
        } );

        tasks.Add( task );
    }

    await Task.WhenAll( tasks );
}

And this is the output I get:

iisexpress.exe Information: 0 : Elapsed time to create record rows: 00:00:00.0719448
iisexpress.exe Information: 0 : Inserting rows
iisexpress.exe Information: 0 : Inserted batch for partition $100
...
iisexpress.exe Information: 0 : Successfully inserted 28000 rows into table test
iisexpress.exe Information: 0 : Elapsed time: 00:01:07.1398928

This is a bit faster than in my other app, at over 400 ROPS (28,000 rows in about 67 seconds). This is still unacceptable. And again in this test, my CPU (8 logical processors) is nearly maxed out while disk access is nearly idle.

I am at a loss as to what is wrong.

UPDATE 5

Round after round of fiddling and tweaking has yielded some improvements, but I just can't get much faster than 500-700(ish) ROPS doing batch InsertOrReplace operations (in batches of 100).

This test is done in the Azure cloud, using a small instance (or two). Based on comments below I'm resigned to the fact that local testing will be slow at best.

Here are a couple of examples. Each example uses its very own PartitionKey:

Successfully inserted 904 rows into table org1; TraceSource 'w3wp.exe' event
Elapsed time: 00:00:01.3401031; TraceSource 'w3wp.exe' event

Successfully inserted 4130 rows into table org1; TraceSource 'w3wp.exe' event
Elapsed time: 00:00:07.3522871; TraceSource 'w3wp.exe' event

Successfully inserted 28020 rows into table org1; TraceSource 'w3wp.exe' event
Elapsed time: 00:00:51.9319217; TraceSource 'w3wp.exe' event

Maybe it's my MSDN Azure account that has some performance caps? I don't know.

At this point I think I'm done with this. Maybe it's fast enough to use for my purposes, or maybe I'll follow a different path.

CONCLUSION

All answers below are good!

For my specific question, I've been able to see speeds up to 2k ROPS on a small Azure instance, more typically around 1k. Since I need to keep costs down (and therefore instance sizes down), this defines what I will be able to use tables for.

Thanks everyone for all the help.

Asked Jul 30 '13 by Keith Murray


1 Answer

basic concept - use parallelism to speed this up.

step 1 - give your threadpool enough threads to pull this off - ThreadPool.SetMinThreads(1024, 256);

step 2 - use partitions. I use guids as Ids, and use the last two characters to split into 256 unique partitions (actually I group those into N subsets, 48 partitions in my case) - see the sketch below
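
rough sketch of the idea (not the exact helper from the code below, which builds key sets via a lookup table - this is just the gist):

// map a Guid to one of N partitions via its last two hex chars
// needs: using System;
static string GetPartitionFor(Guid id, int partitionCount)
{
    // positions 34-35 of "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" are the last two hex digits
    var lastTwo = id.ToString().Substring(34, 2);
    var bucket = Convert.ToInt32(lastTwo, 16);        // 0..255
    return "subSet_" + (bucket % partitionCount);     // fold 256 buckets into N partitions
}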

step 3 - insert using tasks, i use object pooling for table refs

public List<T> InsertOrUpdate(List<T> items)
{
    var subLists = SplitIntoPartitionedSublists(items);

    var tasks = new List<Task>();

    foreach (var subList in subLists)
    {
        List<T> list = subList;
        var task = Task.Factory.StartNew(() =>
            {
                var batchOp = new TableBatchOperation();
                var tableRef = GetTableRef();

                foreach (var item in list)
                {
                    batchOp.Add(TableOperation.InsertOrReplace(item));
                }

                tableRef.ExecuteBatch(batchOp);
                ReleaseTableRef(tableRef);
            });
        tasks.Add(task);
    }

    Task.WaitAll(tasks.ToArray());

    return items;
}

private IEnumerable<List<T>> SplitIntoPartitionedSublists(IEnumerable<T> items)
{
    var itemsByPartion = new Dictionary<string, List<T>>();

    //split items into partitions
    foreach (var item in items)
    {
        var partition = GetPartition(item);
        if (itemsByPartion.ContainsKey(partition) == false)
        {
            itemsByPartion[partition] = new List<T>();
        }
        item.PartitionKey = partition;
        item.ETag = "*";
        itemsByPartion[partition].Add(item);
    }

    //split into subsets
    var subLists = new List<List<T>>();
    foreach (var partition in itemsByPartion.Keys)
    {
        var partitionItems = itemsByPartion[partition];
        for (int i = 0; i < partitionItems.Count; i += MaxBatch)
        {
            subLists.Add(partitionItems.Skip(i).Take(MaxBatch).ToList());
        }
    }

    return subLists;
}

private void BuildPartitionIndentifiers(int partitonCount)
{
    var chars = new char[] { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f' }.ToList();
    var keys = new List<string>();

    for (int i = 0; i < chars.Count; i++)
    {
        var keyA = chars[i];
        for (int j = 0; j < chars.Count; j++)
        {
            var keyB = chars[j];
            keys.Add(string.Concat(keyA, keyB));
        }
    }

    var keySetMaxSize = Math.Max(1, (int)Math.Floor((double)keys.Count / ((double)partitonCount)));
    var keySets = new List<List<string>>();

    if (partitonCount > keys.Count)
    {
        partitonCount = keys.Count;
    }

    //Build the key sets
    var index = 0;
    while (index < keys.Count)
    {
        var keysSet = keys.Skip(index).Take(keySetMaxSize).ToList();
        keySets.Add(keysSet);
        index += keySetMaxSize;
    }

    //build the lookups and datatable for each key set
    _partitions = new List<string>();
    for (int i = 0; i < keySets.Count; i++)
    {
        var partitionName = String.Concat("subSet_", i);
        foreach (var key in keySets[i])
        {
            _partitionByKey[key] = partitionName;
        }
        _partitions.Add(partitionName);
    }
}

private string GetPartition(T item)
{
    var partKey = item.Id.ToString().Substring(34, 2);
    return _partitionByKey[partKey];
}

private string GetPartition(Guid id)
{
    var partKey = id.ToString().Substring(34, 2);
    return _partitionByKey[partKey];
}

private CloudTable GetTableRef()
{
    CloudTable tableRef = null;
    //try to pop a table ref out of the stack
    var foundTableRefInStack = _tableRefs.TryPop(out tableRef);
    if (foundTableRefInStack == false)
    {
        //no table ref available must create a new one
        var client = _account.CreateCloudTableClient();
        client.RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(1), 4);
        tableRef = client.GetTableReference(_sTableName);
    }

    //ensure table is created
    if (_bTableCreated != true)
    {
        tableRef.CreateIfNotExists();
        _bTableCreated = true;
    }

    return tableRef;
}

result - 19-22k ops, which is about the storage account maximum

hit me up if you're interested in the full source

need moar? use multiple storage accounts!
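
something like this (sketch only, connection strings are placeholders) - shard table refs across accounts and reuse the same partition/batch logic per account:

// needs: using System.Threading; using Microsoft.WindowsAzure.Storage; using Microsoft.WindowsAzure.Storage.Table;
// round-robin table refs across several storage accounts
static readonly CloudTableClient[] _shardClients = new[]
{
    CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;AccountName=acct1;AccountKey=...").CreateCloudTableClient(),
    CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;AccountName=acct2;AccountKey=...").CreateCloudTableClient(),
};

static int _shardCounter;

static CloudTable GetShardedTableRef(string tableName)
{
    // pick the next account, masking so the counter wraps safely
    var i = Interlocked.Increment(ref _shardCounter) & int.MaxValue;
    var client = _shardClients[i % _shardClients.Length];
    return client.GetTableReference(tableName);
}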

this is from months of trial and error, testing, beating my head against a desk. I really hope it helps.

Answered by JTtheGeek