
Azure blob storage throttling

We are trying to move some data from one of our blob storage accounts and we are getting throttled.

Initially, we were getting 9 Gbps, but soon after we got throttled down to 1.1 Gbps.

We also started receiving errors saying that Azure forcibly closed the connection and we were getting network timeouts.

Has anyone experienced this, or does anyone have any knowledge about increasing these limits?

joepour asked Dec 17 '18 09:12


People also ask

What is Azure storage throttling?

Throttling happens at two levels. Azure Resource Manager throttles requests for the subscription and tenant. If the request is under the throttling limits for the subscription and tenant, Resource Manager routes the request to the resource provider.

When Azure storage throttles your application the service begins to return?

When your application reaches the limit of what a partition can handle for your workload, Azure Storage begins to return error code 503 (Server Busy) or error code 500 (Operation Timeout) responses. If 503 errors are occurring, consider modifying your application to use an exponential backoff policy for retries.
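As an illustration only, here is a minimal Python sketch of such an exponential backoff policy. `request_fn` is a hypothetical callable that performs one storage request and returns an object with a `status_code` attribute; the production Azure SDKs ship retry policies that do this for you:

    import random
    import time

    def with_backoff(request_fn, max_retries=5, base_delay=1.0, max_delay=32.0):
        """Retry request_fn with exponential backoff plus jitter on 500/503."""
        for attempt in range(max_retries + 1):
            response = request_fn()
            # Only retry the throttling/timeout codes Azure Storage returns.
            if response.status_code not in (500, 503) or attempt == max_retries:
                return response
            # Exponential backoff: 1 s, 2 s, 4 s, ... capped at max_delay,
            # plus random jitter so parallel clients do not retry in lockstep.
            delay = min(base_delay * 2 ** attempt, max_delay)
            time.sleep(delay + random.uniform(0, delay / 2))

The jitter matters at high throughput: without it, many clients that were throttled at the same moment all retry at the same moment and get throttled again.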

Is there a limit on Azure storage?

There is no limit on file shares you can create within a premium account. In a Premium FileStorage account, storage size is limited to 100 TB. You can perform up to 100,000 I/O operations per second.


1 Answer

According to the official document Storage limits in Azure subscription and service limits, quotas, and constraints, there are some limits relevant to your scenario that cannot be worked around:

  1. Maximum request rate per storage account: 20,000 requests per second
  2. Max egress:
    • for general-purpose v2 and Blob storage accounts (all regions): 50 Gbps
    • for general-purpose v1 storage accounts (US regions): 20 Gbps if RA-GRS/GRS is enabled, 30 Gbps for LRS/ZRS
    • for general-purpose v1 storage accounts (non-US regions): 10 Gbps if RA-GRS/GRS is enabled, 15 Gbps for LRS/ZRS
  3. Target throughput for a single blob: up to 60 MiB per second, or up to 500 requests per second

If you are downloading data to a local environment programmatically, then beyond your own network bandwidth and stability, you have to keep the number of concurrent requests per blob under 500 and the total request rate across the account under 20,000 per second. Controlling that concurrency is the key point, as in the sketch below.
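One way to bound that concurrency is with semaphores. This is a minimal Python sketch under stated assumptions: fetch_range is a hypothetical stand-in for one ranged GET against a blob, and the two caps are assumptions derived from the limits quoted above, not recommended values:

    import threading
    from concurrent.futures import ThreadPoolExecutor

    # Illustrative caps derived from the limits above (assumptions; tune to taste).
    PER_BLOB_LIMIT = 500     # stay at or under the per-blob request target
    ACCOUNT_WORKERS = 64     # keep total concurrency well below the account limit

    account_gate = threading.Semaphore(ACCOUNT_WORKERS)

    def fetch_range(blob_name, start, size):
        """Stand-in for one ranged GET of a blob; replace with a real SDK call."""
        return (blob_name, start, size)

    def download_blob(blob_name, blob_size, chunk=4 * 1024 * 1024):
        """Download one blob in chunks, bounding concurrent requests per blob."""
        blob_gate = threading.Semaphore(PER_BLOB_LIMIT)
        offsets = range(0, blob_size, chunk)

        def guarded(offset):
            with account_gate, blob_gate:
                return fetch_range(blob_name, offset, min(chunk, blob_size - offset))

        with ThreadPoolExecutor(max_workers=ACCOUNT_WORKERS) as pool:
            return list(pool.map(guarded, offsets))

The account-wide semaphore keeps the total number of in-flight requests bounded even if several blobs are being downloaded at once, while the per-blob semaphore enforces the per-blob target separately.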

If you are just moving data within Azure, or do not need to do it programmatically, the best way is to use the official data transfer tools AzCopy (for Windows or Linux) or Azure Data Factory. Their built-in retry handling means you do not need to manage these limits yourself; just wait for the transfer to finish.
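For example, an AzCopy v10 server-side copy between two accounts looks roughly like this, where the account names, container, and SAS tokens are placeholders:

    azcopy copy "https://<source-account>.blob.core.windows.net/<container>?<SAS>" \
                "https://<dest-account>.blob.core.windows.net/<container>?<SAS>" \
                --recursive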

If you have any concerns, please feel free to let me know.

Peter Pan answered Sep 23 '22 13:09