I used AWS EC2 to deploy a Python app that consumes data from Apache Kafka. Over the last several days, I have found that the CPU steal time becomes very high (about 35%) when the volume of incoming data grows.
The EC2 instance is a t2.medium, with 2 vCPUs and 4 GB of memory. Could anybody tell me why this happens, and whether there is any way to avoid it?
It is rather difficult to comment without looking at your application and metrics. My guess is that this is because T2 instances are burstable performance instances:
they provide a baseline level of CPU performance under normal conditions, and when the load increases they can burst, i.e., temporarily raise CPU performance above that baseline.
A CPU Credit represents the amount of CPU burst available to an instance. You spend CPU Credits to raise CPU performance above the baseline during a burst period.
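You can watch your credit balance yourself via the `CPUCreditBalance` CloudWatch metric. Here is a minimal sketch using boto3; it assumes boto3 is installed and configured with credentials, and `INSTANCE_ID` is a hypothetical placeholder you would replace with your own instance id:

```python
from datetime import datetime, timedelta, timezone

INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical placeholder, not a real instance

def credit_balance_request(instance_id, now=None):
    """Build kwargs for CloudWatch get_metric_statistics covering the last hour."""
    now = now or datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/EC2",
        "MetricName": "CPUCreditBalance",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "StartTime": now - timedelta(hours=1),
        "EndTime": now,
        "Period": 300,                # one datapoint every 5 minutes
        "Statistics": ["Average"],
    }

if __name__ == "__main__":
    import boto3  # requires AWS credentials to actually run
    cw = boto3.client("cloudwatch")
    resp = cw.get_metric_statistics(**credit_balance_request(INSTANCE_ID))
    for point in sorted(resp["Datapoints"], key=lambda d: d["Timestamp"]):
        print(point["Timestamp"], point["Average"])
```

If the balance trends toward zero while your Kafka consumer is under load, that is the point where steal time will spike.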
When you run out of CPU credits, overall performance degrades; it is not just that you can no longer burst. In fact, you will observe CPU steal time of 90% or more, meaning the hypervisor is not scheduling your instance on the CPU while it is out of credits. See http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html#t2-instances-cpu-credits for more details.
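On the instance itself you can measure steal time directly from `/proc/stat` (the `steal` field is the 8th counter on the `cpu` line, on Linux). Here is a small sketch that computes the steal percentage between two samples; the two sample strings below are illustrative values, not real readings:

```python
def steal_percent(sample1, sample2):
    """Percentage of CPU time stolen by the hypervisor between two /proc/stat samples.

    Fields after "cpu": user nice system idle iowait irq softirq steal guest guest_nice
    """
    f1 = [int(x) for x in sample1.split()[1:]]
    f2 = [int(x) for x in sample2.split()[1:]]
    total = sum(f2) - sum(f1)        # total jiffies elapsed between samples
    steal = f2[7] - f1[7]            # jiffies spent waiting for the hypervisor
    return 100.0 * steal / total

# Illustrative samples (made-up numbers); in practice, read the first line of
# /proc/stat twice, a second or so apart.
s1 = "cpu 100 0 50 800 10 0 10 30 0 0"
s2 = "cpu 160 0 70 850 10 0 10 100 0 0"
print(steal_percent(s1, s2))  # → 35.0
```

If this number climbs toward 90+% under load, the credit explanation above fits your symptoms.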
Hope this helps.