 

Azure CosmosDB - Partition key reached maximum size of 10 GB

I have created a Cosmos DB collection with a partition key. Since it is a dev environment, I reduced the throughput to 1000 RU/s. Now I'm getting the error below.

Message:

"Errors":["Partition key reached maximum size of 10 GB"]

Azure Cosmos DB containers can be created as fixed or unlimited. Fixed-size containers have a maximum limit of 10 GB and 10,000 RU/s throughput. To create a container as unlimited, you must specify a minimum throughput of 2,500 RU/s.

I have now increased the throughput to 2500 RU/s, but I'm still getting the same error.
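
For reference, throughput can be changed in the portal or programmatically; here is a minimal sketch of the latter, assuming the azure-cosmos Python SDK (v4) and placeholder account, database, and collection names:

```python
from azure.cosmos import CosmosClient

# Placeholder endpoint and key; substitute your own.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("<database>").get_container_client("<collection>")

# Read the current provisioned throughput, then raise it.
current = container.get_throughput()
print("current RU/s:", current.offer_throughput)
container.replace_throughput(2500)
```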

Asked Dec 30 '17 by Sarva

2 Answers

UPDATE - 11 May, 2020

Microsoft has recently increased the capacity of a logical partition from 10 GB to 20 GB. Please see this for more details: https://docs.microsoft.com/en-us/azure/cosmos-db/concepts-limits


I emailed Aravind Krishna, an engineer on the Azure Cosmos DB team, and asked for clarification on this point. This is a summary of his answer:

In Cosmos DB, there are physical and logical partitions. Within a Collection, all documents that share the same value for the partition key live in the same logical partition. One or more logical partitions occupy a physical partition. As developers, we have no control over physical partitioning; we only control what belongs in a logical partition.
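
To make that concrete, here is a minimal sketch, assuming the azure-cosmos Python SDK (v4) and hypothetical names (logsdb, events, /deviceId), of declaring a partition key and writing documents into logical partitions:

```python
from azure.cosmos import CosmosClient, PartitionKey

# Placeholder credentials and names; substitute your own.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
db = client.create_database_if_not_exists("logsdb")

# The partition key path is declared when the container is created.
container = db.create_container_if_not_exists(
    id="events",
    partition_key=PartitionKey(path="/deviceId"),
    offer_throughput=1000,
)

# These two documents share the logical partition "device-42" ...
container.upsert_item({"id": "1", "deviceId": "device-42", "msg": "boot"})
container.upsert_item({"id": "2", "deviceId": "device-42", "msg": "ready"})
# ... while this one lives in a different logical partition.
container.upsert_item({"id": "3", "deviceId": "device-7", "msg": "boot"})
```

Every document whose deviceId is "device-42" counts against the same logical partition's storage, which is exactly the limit the error message refers to.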

Regardless of whether a Collection is Fixed (10 GB) or Unlimited, the 10 GB limit applies to each logical partition. Period.

So Sarva, you will need to either rethink your partition key or implement rolling logs to ensure that the data within your debug log partition doesn't exceed the 10 GB partition limit.
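
One common way to implement the rolling-log idea is a synthetic partition key that embeds a time bucket, so each day's logs land in a fresh logical partition. A minimal sketch under that assumption, with hypothetical field names and a container whose partition key path is /partitionKey:

```python
import uuid
from datetime import datetime, timezone

def make_log_item(device_id: str, message: str) -> dict:
    """Build a log document whose partition key rolls over daily,
    so no single logical partition grows without bound."""
    day = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    return {
        "id": str(uuid.uuid4()),
        # Synthetic key, e.g. "device-42_2020-05-11".
        "partitionKey": f"{device_id}_{day}",
        "deviceId": device_id,
        "day": day,
        "message": message,
    }

# container.upsert_item(make_log_item("device-42", "disk almost full"))
```

Queries that filter on a single device and day then target one logical partition, while no partition ever accumulates more than a day's worth of logs.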

Answered by Rob Reagan


UPDATE - 11 May, 2020

Microsoft has recently increased the capacity of a logical partition from 10 GB to 20 GB. Please see this for more details: https://docs.microsoft.com/en-us/azure/cosmos-db/concepts-limits


The reason you're getting this error is that even though an unlimited collection (a.k.a. partitioned collection) has no overall size restriction, each partition in that collection does, and that limit is currently 10 GB. Since you have reached that limit for your partition, you're getting this error. From this link (Question 6):

It is important to choose a partition key property that has a number of distinct values, and lets you distribute your workload evenly across these values. As a natural artifact of partitioning, requests involving the same partition key are limited by the maximum throughput of a single partition. Additionally, the storage size for documents belonging to the same partition key is limited to 10GB. An ideal partition key is one that appears frequently as a filter in your queries and has sufficient cardinality to ensure your solution is scalable.

The only solution I can think of is to recreate the collection and choose a partition key whose data you know will not exceed this 10 GB limit. You will need to transfer the data from your old collection to the new one as well.
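
For a dev-sized dataset, that transfer can be a simple read-and-upsert loop; a minimal sketch, assuming the azure-cosmos Python SDK (v4), placeholder names, and a hypothetical synthetic key for the new container (partition key path /partitionKey):

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
db = client.get_database_client("<database>")

old = db.get_container_client("<old-collection>")
new = db.create_container_if_not_exists(
    id="<new-collection>",
    partition_key=PartitionKey(path="/partitionKey"),  # new, higher-cardinality key
    offer_throughput=1000,
)

for doc in old.read_all_items():
    # Stamp the new partition key; this synthetic value is just an example.
    doc["partitionKey"] = f"{doc.get('deviceId', 'unknown')}_{doc['id']}"
    # Strip system properties before writing to the new container.
    for sys_prop in ("_rid", "_self", "_etag", "_attachments", "_ts"):
        doc.pop(sys_prop, None)
    new.upsert_item(doc)
```

For anything larger than a dev dataset, the change feed or the Azure Cosmos DB Data Migration Tool is a better fit than a single-threaded loop.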

You may find this post useful in choosing a partition key for your collection: https://docs.microsoft.com/en-us/azure/cosmos-db/partition-data#design-for-partitioning.

Furthermore, per this blog post, the minimum RU/s for an unlimited collection is now 1000 instead of 2500.

Answered by Gaurav Mantri