I browsed the Amazon RDS pricing page today and now want to know how the I/O rate is actually calculated. What does "$0.10 per 1 million requests" really mean?
Can anyone give a simple example of how many I/Os a simple query from EC2 to MySQL on RDS produces?
You can view them in CloudWatch: select RDS and then find the ReadIOPS and WriteIOPS metrics for your database. Once the graph shows up, select the 1-minute granularity and "average" from the dropdown. By summing up ReadIOPS and WriteIOPS you can see how many IOPS your operations consume.
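If you would rather pull the same numbers programmatically, here is a minimal sketch using boto3 and CloudWatch's GetMetricStatistics call. The instance identifier, region, and time window below are placeholder assumptions; summing the average IOPS times the period length approximates the number of billable I/O requests.

```python
# Minimal sketch: estimate RDS I/O requests from CloudWatch ReadIOPS/WriteIOPS.
# "my-db-instance", the region, and the one-hour window are placeholders.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumed region

end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)   # last hour, as an example
period = 60                        # 1-minute granularity, as suggested above

def total_ios(metric_name: str) -> float:
    """Sum (average IOPS x seconds per datapoint) to approximate total I/O requests."""
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName=metric_name,
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-db-instance"}],
        StartTime=start,
        EndTime=end,
        Period=period,
        Statistics=["Average"],
    )
    return sum(dp["Average"] * period for dp in resp["Datapoints"])

ios = total_ios("ReadIOPS") + total_ios("WriteIOPS")
print(f"~{ios:,.0f} I/O requests in the last hour "
      f"(~${ios / 1_000_000 * 0.10:.4f} at $0.10 per million)")
```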
IOPS – The number of I/O operations completed each second. This metric is reported as the average IOPS for a given time interval. Amazon RDS reports read and write IOPS separately on 1-minute intervals. Total IOPS is the sum of the read and write IOPS.
Based on your load, you don't need provisioned IOPS. Unless you are going to need in excess of 2,500-3,500 total IOPS, standard storage at 300 GB will do (because of EBS striping).
To check the current value for max_connections, run the following command after connecting to your Amazon RDS for PostgreSQL instance: postgres=> show max_connections; The default value of max_connections for both RDS for MySQL and RDS for PostgreSQL depends on the instance class used by the Amazon RDS instance.
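If you want to check it from code rather than the psql prompt, here is a minimal sketch using psycopg2; the endpoint, user, password, and database name are placeholders for whatever your instance actually uses.

```python
# Minimal sketch: read max_connections from an RDS for PostgreSQL instance.
# All connection details below are placeholders, not real values.
import psycopg2

conn = psycopg2.connect(
    host="my-instance.abcdefgh.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    dbname="postgres",
    user="postgres",
    password="example-password",
)
with conn.cursor() as cur:
    cur.execute("SHOW max_connections;")
    print("max_connections =", cur.fetchone()[0])
conn.close()
```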
In general, it is the price for the EBS storage service. Amazon gives an example like this for EBS (in the "Projecting Costs" section):
As an example, a medium sized website database might be 100 GB in size and expect to average 100 I/Os per second over the course of a month. This would translate to $10 per month in storage costs (100 GB x $0.10/month), and approximately $26 per month in request costs (~2.6 million seconds/month x 100 I/O per second * $0.10 per million I/O).
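As a back-of-the-envelope restatement of that example, here is the same arithmetic in a short sketch; the workload numbers (100 GB, 100 I/O per second) come from the quoted text, and the rest is just the published rates.

```python
# Rough monthly cost for the quoted example: 100 GB at $0.10/GB-month,
# plus 100 I/O per second at $0.10 per million I/O requests.
size_gb = 100
iops = 100
seconds_per_month = 30 * 24 * 3600          # ~2.6 million seconds

storage_cost = size_gb * 0.10               # -> $10.00 per month
requests = iops * seconds_per_month         # -> ~260 million I/O requests
request_cost = requests / 1_000_000 * 0.10  # -> ~$26.00 per month

print(f"storage: ${storage_cost:.2f}, requests: ${request_cost:.2f}")
```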
If you have a running application on Linux, here is an article on how to measure the cost of EBS: