I'd like to make sure I'm interpreting AWS's ECS Fargate pricing model correctly when compared to an m4.large EC2 instance (2 vCPU, 8 GB memory) running non-stop (even at 1% CPU/memory utilization) for an entire month (730 hrs).
# Monthly cost estimates
Fargate:
cpu = 730 hrs * 2 vCPU * $0.0506 = $73.88
mem = 730 hrs * 8 GB * $0.0127 = $74.17
total = $73.88 + $74.17 = $148.05
EC2 m4.large (1-yr reserved, no upfront):
total = 730 hrs * $0.062 = $45.26
EC2 m4.large (on-demand):
total = 730 hrs * $0.10 = $73.00
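As a sanity check, here is a minimal Python sketch that reproduces the three estimates above. The hourly rates are the ones quoted in this question, not live AWS prices, so plug in current rates from the AWS pricing pages before relying on it. (It prints $148.04 rather than $148.05 because the lines above round each component before summing.)

```python
# Rough monthly cost comparison using the hourly rates quoted above.
# All rates are assumptions taken from this post, not live AWS prices.
HOURS = 730  # hours in an average month

FARGATE_VCPU_HR = 0.0506  # $ per vCPU-hour (original 2017 rate)
FARGATE_GB_HR = 0.0127    # $ per GB-hour (original 2017 rate)
M4_RESERVED_HR = 0.062    # $/hr, m4.large, 1-yr reserved, no upfront
M4_ONDEMAND_HR = 0.10     # $/hr, m4.large, on-demand

def fargate_monthly(vcpu: float, mem_gb: float) -> float:
    """Fargate bills on the task's *allocated* vCPU and memory,
    regardless of how much of it the container actually uses."""
    return HOURS * (vcpu * FARGATE_VCPU_HR + mem_gb * FARGATE_GB_HR)

print(f"Fargate 2 vCPU / 8 GB: ${fargate_monthly(2, 8):.2f}")  # $148.04
print(f"m4.large reserved:     ${HOURS * M4_RESERVED_HR:.2f}")  # $45.26
print(f"m4.large on-demand:    ${HOURS * M4_ONDEMAND_HR:.2f}")  # $73.00
```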
It appears Fargate would be ~3x as expensive as a reserved EC2 instance (and ~2x on-demand). Does my Fargate pricing look accurate? I'm assuming Fargate isn't intended for something like a 24/7 website, but rather for one-off jobs, analogous perhaps to a Lambda function that runs a container image.
Am I correct that I'm billed for the entire Fargate task's CPU & memory allocation, regardless of whether I'm utilizing 1% or 100% of the resources?
References:

> Fargate is more expensive than EC2 when running the same workloads in most cases. However, there are ways to optimize AWS Fargate costs that make Fargate significantly cheaper. But cost should not be the only deciding factor; you should also consider your specific use case to select the best option.

> With AWS Fargate, there are no upfront costs and you pay only for the resources you use. You pay for the amount of vCPU, memory, and storage resources consumed by your containerized applications running on Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS).

> Perhaps most important are the upper and lower bounds. On the lower end, if you can't keep your ECS cluster reserved at a rate of 80%, you will almost certainly save money moving to Fargate. On the upper end, if your cluster is nearly 100% utilized, Fargate will cost between about 15% and 35% more.
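To make that break-even idea concrete, here is a rough sketch of the simplistic model behind it (my assumption, not from the quoted source): if an EC2 node is only partially utilized, the effective cost of the resources you actually use is the instance price divided by utilization, while Fargate charges only for what a task allocates.

```python
# Rough break-even model: at what utilization does an EC2 node's
# effective cost match Fargate for the same allocated resources?
# Rates are the pre-2019 ones quoted in this question (assumptions).
FARGATE_HOURLY = 2 * 0.0506 + 8 * 0.0127  # 2 vCPU / 8 GB task: $0.2028/hr
M4_ONDEMAND = 0.10                        # $/hr
M4_RESERVED = 0.062                       # $/hr, 1-yr, no upfront

# EC2 effective cost per used capacity = instance price / utilization,
# so break-even utilization = instance price / Fargate price.
for label, rate in [("on-demand", M4_ONDEMAND), ("reserved", M4_RESERVED)]:
    print(f"{label}: Fargate wins below ~{rate / FARGATE_HOURLY:.0%} utilization")
# on-demand: ~49%; reserved: ~31%. The 80% figure quoted above comes from
# a different instance and price mix, so treat this only as a rough model.
```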
Fargate is not on the free tier list. I suggest you use ECS with a single EC2 instance in the ECS cluster, which will work in a similar way. Why would it matter if your EC2 instance has low utilization?
Your calculations seem correct to me.
I run a bunch of 24/7 websites as 0.25 vCPU / 0.5 GB RAM Fargate tasks, just because of how easy they are to set up. They don't get a lot of traffic and are cached pretty heavily, but if they need to they can scale out to 10x based on target CPU utilization.
Used that way, I think they are pretty cost-efficient.
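For a sense of scale, here is a quick sketch of what a setup like that costs, using the January 2019 rates cited in the update below; the 5% burst fraction is a made-up illustration, not a measured figure:

```python
# Monthly cost of a small 0.25 vCPU / 0.5 GB Fargate task, plus the
# marginal cost of scaling out to 10 tasks for part of the month.
# Rates are the January 2019 ones cited in the update below.
HOURS = 730
VCPU_HR, GB_HR = 0.04048, 0.004445  # $ per vCPU-hr / $ per GB-hr

task_hr = 0.25 * VCPU_HR + 0.5 * GB_HR  # ~$0.0123/hr per task
baseline = HOURS * task_hr              # one task, all month: ~$9.01
# Hypothetical: bursting to 10 tasks for 5% of the month.
burst = 0.05 * HOURS * 9 * task_hr      # 9 extra tasks: ~$4.05

print(f"baseline: ${baseline:.2f}, with bursts: ${baseline + burst:.2f}")
```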
Update: AWS updated Fargate prices January 7, 2019. The prices now are $0.04048 per vCPU per hour and $0.004445 per GB memory per hour. Your example would now be:
Fargate:
cpu = 730 hrs * 2 vCPU * $0.04048 = $59.10
mem = 730 hrs * 8 GB * $0.004445 = $25.96
total = $59.10 + $25.96 = $85.06
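A quick sketch of how much that price cut changes the original comparison, using only figures already computed in this thread; notably, the remaining premium over on-demand EC2 lands near the lower end of the 15-35% band quoted earlier:

```python
# Compare the original Fargate estimate with the post-cut total and
# with on-demand EC2, using the dollar figures from this thread.
old_fargate, new_fargate, ec2_on_demand = 148.05, 85.06, 73.00

print(f"Fargate price cut: {1 - new_fargate / old_fargate:.0%}")  # ~43%
print(f"Premium over on-demand m4.large: "
      f"{new_fargate / ec2_on_demand - 1:.0%}")                   # ~17%
```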