I read EC2's docs: instance types, pricing, the FAQ, burstable performance, and also this page about CPU credits. I even asked AWS support the question below, and the answer wasn't clear.
The thing is, according to the docs (though they're not entirely clear) and AWS support, all three instance types have the same performance while bursting: 100% usage of a certain type of CPU core.
So this is my thought process, assuming the t2.micro's RAM is enough and that the software can scale horizontally. Two t2.micro instances cost the same as one t2.small. Assuming the load is distributed equally between them (probably via an AWS load balancer), they will use the same amount of total CPU and consume the same amount of CPU credits. If they were to fall back to baseline performance, it would also be the same.
BUT, while they are bursting, two t2.micro instances can achieve 2x the performance of a t2.small (again, for the same cost). The same concept applies to the t2.medium. Also, using smaller instances allows for tighter auto (or manual) scaling, which saves money.
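To put rough numbers on that, here is a quick back-of-the-envelope sketch. The baseline percentages and credit accrual rates are the documented T2 figures; the hourly prices are older us-east-1 on-demand rates and are only illustrative, so treat them as placeholders:

```python
# Rough comparison: 2x t2.micro vs 1x t2.small vs 1x t2.medium.
# Prices are illustrative us-east-1 on-demand rates; check current pricing.
instance_types = {
    # type:      (hourly price USD, vCPUs, baseline % of a core, credits earned/hour)
    "t2.micro":  (0.013, 1, 10, 6),
    "t2.small":  (0.026, 1, 20, 12),
    "t2.medium": (0.052, 2, 40, 24),
}

def fleet(instance_type, count):
    price, vcpus, baseline, credits = instance_types[instance_type]
    return {
        "hourly_cost": price * count,
        "burst_cores": vcpus * count,        # full cores available while bursting
        "baseline_pct": baseline * count,    # total baseline, as % of one core
        "credits_per_hour": credits * count,
    }

print(fleet("t2.micro", 2))   # same cost, baseline and credit rate as one t2.small,
print(fleet("t2.small", 1))   # but two cores available while bursting instead of one
```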
So my question is: given that RAM and horizontal scaling are not a problem, why would one use anything other than the t2.micro?
EDIT: After some replies, here are a few notes about them:
In steady state, a t2.micro can sustainably handle 10-15 concurrent users. More than that and it will start to deplete its CPU credit balance.
T2 instances are Burstable Performance Instances that provide a baseline level of CPU performance with the ability to burst above the baseline. The baseline performance and ability to burst are governed by CPU Credits. T2 instances accumulate CPU Credits when they are idle, and consume CPU Credits when they are active.
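As a minimal sketch of how I understand that mechanism (one CPU credit = one vCPU at 100% for one minute; the earn rate, baseline, and credit cap are the documented t2.micro values, while the load pattern and starting balance are made up):

```python
# Toy per-minute simulation of a t2.micro's CPU credit balance.
# One CPU credit = one vCPU at 100% utilisation for one minute.
EARN_PER_MINUTE = 6 / 60   # t2.micro earns 6 credits per hour
BASELINE = 0.10            # 10% of one core, covered by the earn rate
CREDIT_CAP = 144.0         # t2.micro maximum credit balance

balance = 30.0             # assumed starting balance
for minute in range(180):
    demand = 1.0 if 60 <= minute < 90 else 0.02   # hypothetical 30-minute burst
    # With no credits left, the CPU is throttled down to the baseline rate.
    used = demand if balance > 0 else min(demand, BASELINE)
    balance = min(CREDIT_CAP, max(0.0, balance + EARN_PER_MINUTE - used))

print(f"balance after 3 hours: {balance:.1f} credits")
```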
Your analysis seems correct.
While the processor type isn't clearly documented, I typically see my t2.micro instances equipped with one Intel Xeon E5-2670 v2 (Ivy Bridge) core, and my t2.medium instances have two of them.
The micro and small should indeed have the same burst performance for as long as they have a reasonable number of CPU credits remaining. I say "a reasonable number" because the performance is documented to degrade gracefully over a 15 minute window, rather than dropping off sharply like the t1.micro does.
Everything about the three classes (except the core, in micro vs small) multiplies by two as you step up: baseline, credits earned per hour, and credit cap. Arguably, the medium is very closely equivalent to two smalls when it comes to short term burst performance (with its two cores) but then again, that's also exactly the capability that you have with two micros, as you point out. If memory is not a concern, and traffic is appropriately bursty, your analysis is sensible.
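To make the "multiplies by two" point concrete, here are the documented figures side by side (a quick sketch; the numbers are the published T2 specs at the time of writing):

```python
# Documented T2 figures: everything except the core count (micro vs small)
# doubles at each step up.
t2 = {
    # type:      (vCPUs, baseline CPU %, credits earned/hour, max credit balance)
    "t2.micro":  (1, 10, 6, 144),
    "t2.small":  (1, 20, 12, 288),
    "t2.medium": (2, 40, 24, 576),
}
for name, (vcpus, baseline, per_hour, cap) in t2.items():
    print(f"{name:10s} vCPUs={vcpus} baseline={baseline}% "
          f"credits/hr={per_hour} cap={cap}")
```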
While the t1 class was almost completely unsuited to a production environment, the same thing is not true of the t2 class. They are worlds apart.
If your code is tight and efficient with memory, and your workload is appropriate for the CPU credit-based model, then I concur with your analysis about the excellent value a t2.micro represents.
Of course, that's a huge "if." However, I have systems in my networks that fit this model perfectly -- their memory is allocated almost entirely at startup and their load is relatively light but significantly variable over the course of a day. As long as you don't approach exhaustion of your credit balances, there's nothing I see wrong with this approach.
There are a lot of moving targets here. What are your instances doing? You said the traffic varies over the day but isn't spiky. If you want to "closely follow" the load with a small number of t2.micro instances, you won't be able to rely much on bursting, because each newly scaled-up instance starts with a low CPU credit balance. If most of your instances only run while they are under load, they will never accumulate CPU credits. You also lose time and money on each startup and on the started-but-unused portion of each billed hour, so scaling up and down too frequently isn't the most cost-efficient approach. Last but not least, the operating system and other software have a more or less fixed overhead; running that stack twice instead of once takes more resources away from your application, in a system where CPU credits accrue only while the load is below the baseline.
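As a rough illustration of the credit-accrual point (the load figures are made up; the earn rate and 10% baseline are the documented t2.micro values):

```python
# A t2.micro that only runs while it is busy never banks credits, because
# credits accumulate only while CPU use stays below the 10% baseline.
EARN_PER_HOUR = 6   # t2.micro credit accrual rate (documented)

def net_credits_per_hour(avg_core_utilisation):
    """Credits gained (positive) or burned (negative) per hour at a given load."""
    spent = avg_core_utilisation * 60   # core-minutes of CPU used per hour
    return EARN_PER_HOUR - spent

print(net_credits_per_hour(0.05))   # mostly idle: +3 credits/hour banked
print(net_credits_per_hour(0.30))   # scaled in only under load: -12 credits/hour
```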
If you want extreme cost efficiency, use spot instances.