We are exploring Aurora Serverless as database storage for one of our projects. While doing a POC to fetch events from an Aurora Serverless cluster, we are receiving a "Database returned more than the allowed response size limit" exception.
More details about the database: Aurora PostgreSQL Serverless.
For the POC we are using the Data API, which has its own limitations.
import boto3

client = boto3.client('rds-data')

sql_statement = "select * from table_name limit 1000"

response = client.execute_statement(
    database='mydb',
    secretArn='<secret_arn>',
    resourceArn='<resource_arn>',
    sql=sql_statement
)
print(response)
All Aurora Serverless v1 DB clusters have the following limitations: You can't export Aurora Serverless v1 snapshots to Amazon S3 buckets. You can't use AWS Database Migration Service and Change Data Capture (CDC) with Aurora Serverless v1 DB clusters.
You can now create Amazon Aurora database clusters with up to 128TB of storage. The new storage limit is available for both the MySQL- and PostgreSQL-compatible editions of Amazon Aurora. Previously, Aurora database instances supported 64TB of storage.
Even though Aurora Serverless offers the option to reduce capacity to zero during periods of no usage, this setting is not recommended for production environments. It takes a few seconds to reprovision compute capacity, which will deliver a poor experience to production users during that time frame.
Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs) and will automatically attempt to recover your database in a healthy AZ with no data loss.
When you run a query against an Aurora Serverless instance through the Data API, you are subject to two size limitations.
Currently, there is nothing in the Data API itself to overcome these limits, but the Amazon Aurora User Guide (p.164) does recommend a workaround for the second problem, the response size limit:
In this case, the size of the result set returned by the database was too large. The Data API limit is 1 MB in the result set returned by the database. To solve this issue, make sure that calls to the Data API return 1 MB of data or less. If you need to return more than 1 MB, you can use multiple ExecuteStatement calls with the LIMIT clause in your query.
Considering that, you could back off exponentially until you find an acceptable LIMIT for your result set, or set a fixed LIMIT that you are comfortable will always stay below 1 MB, even if your row size increases in the future.
After deciding how to set your LIMIT clause value (depending on whether your cluster uses MySQL 5.6, MySQL 5.7, or PostgreSQL 10.7), you could run a COUNT query to know how many results you will get, and then page through them until you have executed roughly COUNT / LIMIT statements.
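Below is a minimal sketch of that COUNT / LIMIT approach against a PostgreSQL cluster, reusing the placeholder ARNs and table name from the question; the id sort column and the page size are assumptions you would adjust for your own schema and row sizes:

import math
import boto3

client = boto3.client('rds-data')

RESOURCE_ARN = '<resource_arn>'   # placeholders, same as in the question
SECRET_ARN = '<secret_arn>'
DATABASE = 'mydb'
PAGE_SIZE = 1000                  # pick a value that keeps each response well under 1 MB


def run(sql):
    # Thin wrapper around the Data API call used throughout this answer.
    return client.execute_statement(
        resourceArn=RESOURCE_ARN,
        secretArn=SECRET_ARN,
        database=DATABASE,
        sql=sql,
    )


# 1) Find out how many rows the query would return in total.
count_response = run("select count(*) from table_name")
total_rows = count_response['records'][0][0]['longValue']

# 2) Fetch the rows in ceil(COUNT / LIMIT) pages using LIMIT/OFFSET.
#    'id' is a hypothetical stable sort key; use whatever key your table has.
all_records = []
for page in range(math.ceil(total_rows / PAGE_SIZE)):
    offset = page * PAGE_SIZE
    page_response = run(
        f"select * from table_name order by id limit {PAGE_SIZE} offset {offset}"
    )
    all_records.extend(page_response['records'])

print(len(all_records))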
Another option would be to iterate until your statement response has fewer rows than your LIMIT.
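A sketch of that second option, reusing the run helper and PAGE_SIZE from the example above, so no COUNT query is needed:

offset = 0
while True:
    page_response = run(
        f"select * from table_name order by id limit {PAGE_SIZE} offset {offset}"
    )
    records = page_response['records']
    # ... process this page of records here ...
    if len(records) < PAGE_SIZE:
        break  # a short (or empty) page means we have reached the end
    offset += PAGE_SIZE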