
AWS Lambda and inaccurate memory allocation

I've realized that I need to allocate much more memory to my AWS Lambda functions than they actually seem to use, otherwise I get:

{
"errorMessage": "Metaspace",
"errorType": "java.lang.OutOfMemoryError"
}

For instance, I have a Lambda function with 128 MB allocated; it crashes all the time with that error, even though the console says "Max memory used: 56 MB".
When I allocate 256 MB instead, it no longer crashes, but it always gives me a "Max memory used" between 75 and 85 MB.

How come? Thanks.

asked Feb 09 '16 by Maxime Laval

People also ask

How much memory should I allocate to AWS Lambda?

As of late 2020, you can allocate up to 10 GB of memory to a Lambda function. This is more than a 3x increase compared to previous limits. Lambda allocates CPU and other resources linearly in proportion to the amount of memory configured. That means you can now have access to up to 6 vCPUs in each execution environment.
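
For example, the memory setting can be changed programmatically with the AWS SDK for Java. This is only a sketch: "my-function" is a placeholder name, and it assumes the v1 SDK is on the classpath with credentials and region configured in the environment.

import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.UpdateFunctionConfigurationRequest;

// Sketch: raise a function's memory setting (and, implicitly, its CPU share).
public class RaiseLambdaMemory {
    public static void main(String[] args) {
        AWSLambda lambda = AWSLambdaClientBuilder.defaultClient();
        lambda.updateFunctionConfiguration(new UpdateFunctionConfigurationRequest()
                .withFunctionName("my-function")   // placeholder function name
                .withMemorySize(1024));            // MB; up to 10240 MB
    }
}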

What happens when AWS Lambda runs out of memory?

If a function exceeds its configured memory, the invocation is killed with an out-of-memory error. You can save money by allocating only as much memory as you really need, but the function will fail if it goes over that limit.

Why you should not use AWS Lambda?

You do not want to use Lambda for long-running workloads, because it runs functions for at most 15 minutes at a time. By default it limits concurrent function executions to 1,000. And Lambda bills can quickly run through your budget if you are unsure how to optimize AWS costs.


2 Answers

The amount of memory you allocate to your Java Lambda function is shared by the heap, Metaspace, and reserved code cache.

The java command executed by the container for a function allocated 256M is something like:

java -XX:MaxHeapSize=222823k -XX:MaxMetaspaceSize=26214k -XX:ReservedCodeCacheSize=13107k -XX:+UseSerialGC -Xshare:on -XX:-TieredCompilation -jar /var/runtime/lib/LambdaJavaRTEntry-1.0.jar

222823k + 26214k + 13107k = 256M

The java command executed by the container for a function allocated 384M is something like:

java -XX:MaxHeapSize=334233k -XX:MaxMetaspaceSize=39322k -XX:ReservedCodeCacheSize=19661k -XX:+UseSerialGC -Xshare:on -XX:-TieredCompilation -jar /var/runtime/lib/LambdaJavaRTEntry-1.0.jar

334233k + 39322k + 19661k = 384M

So, the formula appears to be

85% heap + 10% meta + 5% reserved code cache = 100% of configured function memory
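
If that 85/10/5 split is right, you can derive the flags for any memory setting. Here's a quick sketch of my own; the rounding (Metaspace and code cache rounded to the nearest KB, heap takes the remainder) is inferred from the two samples above:

public class LambdaJvmSplit {
    public static void main(String[] args) {
        // Configured function memory in MB, e.g. 256 or 384.
        int configuredMb = args.length > 0 ? Integer.parseInt(args[0]) : 256;
        long totalKb = configuredMb * 1024L;
        // 10% Metaspace and 5% code cache, rounded to the nearest KB;
        // the heap gets whatever is left (~85%).
        long metaKb = Math.round(totalKb * 0.10);
        long codeKb = Math.round(totalKb * 0.05);
        long heapKb = totalKb - metaKb - codeKb;
        System.out.printf(
            "-XX:MaxHeapSize=%dk -XX:MaxMetaspaceSize=%dk -XX:ReservedCodeCacheSize=%dk%n",
            heapKb, metaKb, codeKb);
    }
}

For 256 MB this prints exactly the flags shown above (222823k / 26214k / 13107k), and for 384 MB it reproduces the second set.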

I honestly don't know how the "Max Memory Used" value reported in CloudWatch logs is calculated. It doesn't align with anything that I'm seeing.
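
One way to investigate is to dump the JVM's own memory pools from inside the handler and compare them with what CloudWatch reports. A minimal sketch:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class MemoryReport {
    // Prints each JVM memory pool (heap generations, Metaspace, code cache)
    // so usage can be compared against CloudWatch's "Max Memory Used".
    public static void print() {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage u = pool.getUsage();
            System.out.printf("%-25s used=%8dk max=%8dk%n",
                pool.getName(),
                u.getUsed() / 1024,
                u.getMax() < 0 ? -1 : u.getMax() / 1024);  // -1 = undefined
        }
    }

    public static void main(String[] args) {
        print();
    }
}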

answered Sep 28 '22 by jstell


What is happening here could be one of two things:

  1. The JVM is failing to reserve the additional memory it asked for, which causes the error you see and keeps the reported usage low, since the request for more memory is what crashed it.
  2. You're exhausting only the Metaspace, which @jstell points out is only 10% of the total memory, while using just 56 MB of heap space.

When you move to a larger memory setting, the Metaspace allocation grows with it, which lets the function run.
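
Metaspace holds class metadata, so it fills up with loaded classes rather than ordinary objects. You can reproduce the same java.lang.OutOfMemoryError: Metaspace locally with a sketch like this (not Lambda-specific; run with something like java -XX:MaxMetaspaceSize=32m MetaspaceDemo):

import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

// Demo: exhaust Metaspace while barely touching the heap. Each fresh
// ClassLoader defines its own copy of the class, and that class metadata
// lives in Metaspace, not on the heap.
public class MetaspaceDemo {
    static class Loader extends ClassLoader {
        Class<?> define(byte[] b) {
            return defineClass(null, b, 0, b.length);
        }
    }

    public static void main(String[] args) throws Exception {
        // Read this class's own bytecode once.
        byte[] bytes;
        try (InputStream in =
                MetaspaceDemo.class.getResourceAsStream("MetaspaceDemo.class")) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            for (int n; (n = in.read(buf)) != -1; ) {
                out.write(buf, 0, n);
            }
            bytes = out.toByteArray();
        }
        // Keep every class reachable so its loader (and metadata) can't be
        // collected; eventually throws java.lang.OutOfMemoryError: Metaspace.
        List<Class<?>> pinned = new ArrayList<>();
        while (true) {
            pinned.add(new Loader().define(bytes));
        }
    }
}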

answered Sep 28 '22 by Ryan Gross