I've got a question and I would love to get an answer.
I have a Lambda function whose max memory used is 31 MB. With the allocated memory configured at 128 MB, it takes 2556.68 ms to execute.
However, when I increase the allocated memory to the maximum of 1536 MB, it takes only 621.81 ms to execute fully, and the max memory used is exactly the same: 31 MB.
Why does the Lambda function execute so much faster when the memory used is exactly the same in both cases?
This is because the amount of CPU a Lambda container is allocated is proportional to the amount of memory you configure, not to the memory the function actually uses.
AWS Lambda allocates CPU power proportional to the memory by using the same ratio as a general purpose Amazon EC2 instance type, such as an M3 type. For example, if you allocate 256 MB memory, your Lambda function will receive twice the CPU share than if you allocated only 128 MB.
http://docs.aws.amazon.com/lambda/latest/dg/lambda-introduction-function.html
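To see why this matters for your case: a function whose runtime is dominated by CPU work (not by waiting on I/O) scales almost linearly with the CPU share. The sketch below is a hypothetical, purely CPU-bound handler (not your actual function) that does a fixed amount of work; at 1536 MB it would report a much lower elapsed time than at 128 MB even though its memory footprint never changes:

```python
import time

def handler(event, context):
    """Hypothetical CPU-bound handler: the workload is identical at every
    memory setting, so any speed-up comes from the larger CPU share,
    not from extra memory actually being used."""
    start = time.perf_counter()
    total = 0
    for i in range(5_000_000):  # fixed amount of pure-CPU work
        total += i * i
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {"elapsed_ms": round(elapsed_ms, 2), "result": total}
```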
M3 instances have 8 Xeon E5-2670 (or E5-2670 v2) hyperthreads per 30 GB of RAM, so at that ratio, with 1.5 GB of memory you have approximately 1.5 × (8 ÷ 30) × 2.6 GHz ≈ 1 GHz of CPU. At 128 MB you have only about 1/12 of that.
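To make that arithmetic explicit, here is a rough back-of-envelope sketch; the 8 vCPU / 30 GB / 2.6 GHz constants are the assumed M3 figures from above, not values reported by Lambda itself:

```python
# Assumed M3-class host figures, not an AWS API result.
VCPUS_PER_GB = 8 / 30   # hyperthreads per GB of RAM on an M3-class instance
CLOCK_GHZ = 2.6         # approximate Xeon E5-2670 clock speed

def approx_cpu_ghz(memory_mb: int) -> float:
    """Rough CPU share (in GHz-equivalents) for a given Lambda memory setting."""
    return (memory_mb / 1024) * VCPUS_PER_GB * CLOCK_GHZ

print(approx_cpu_ghz(1536))  # ~1.04 GHz
print(approx_cpu_ghz(128))   # ~0.09 GHz, i.e. about 1/12 of the 1536 MB share
```

That 12x difference in CPU share lines up with the roughly 4x speed-up you observed: your function is evidently not purely CPU-bound, but enough of its runtime is CPU-dependent that the larger allocation makes a big difference.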