I realized that I need to allocate much more memory to my AWS Lambda functions than they actually use, otherwise I get:
{
"errorMessage": "Metaspace",
"errorType": "java.lang.OutOfMemoryError"
}
For instance, I have a Lambda function with 128 MB allocated; it crashes constantly with that error, even though the console says "Max memory used: 56 MB".
When I allocate 256 MB instead, it stops crashing, but "Max memory used" only ever reports between 75 and 85 MB.
How come? Thanks.
Starting today, you can allocate up to 10 GB of memory to a Lambda function. This is more than a 3x increase compared to previous limits. Lambda allocates CPU and other resources linearly in proportion to the amount of memory configured. That means you can now have access to up to 6 vCPUs in each execution environment.
What does this mean? You can save money by allocating only as much memory as you really need, but your function will fail if it exceeds that limit.
You do not want to use Lambda for long-running workloads, because a single invocation can run for at most 15 minutes, and concurrent function executions are limited to 1,000 by default. AWS Lambda bills can also run through your budget quickly if you are unsure how to optimize AWS costs.
The amount of memory you allocate to your Java Lambda function is shared by the heap, metaspace, and reserved code cache.
The java command executed by the container for a function allocated 256M is something like:
java -XX:MaxHeapSize=222823k -XX:MaxMetaspaceSize=26214k -XX:ReservedCodeCacheSize=13107k -XX:+UseSerialGC -Xshare:on -XX:-TieredCompilation -jar /var/runtime/lib/LambdaJavaRTEntry-1.0.jar
222823k + 26214k + 13107k = 256M
The java command executed by the container for a function allocated 384M is something like:
java -XX:MaxHeapSize=334233k -XX:MaxMetaspaceSize=39322k -XX:ReservedCodeCacheSize=19661k -XX:+UseSerialGC -Xshare:on -XX:-TieredCompilation -jar /var/runtime/lib/LambdaJavaRTEntry-1.0.jar
334233k + 39322k + 19661k = 384M
So, the formula appears to be
85% heap + 10% meta + 5% reserved code cache = 100% of configured function memory
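The split above can be reproduced with a few lines of arithmetic. This is my own illustration (not AWS code): it takes the configured function memory in MiB and derives the three JVM limits in KiB, assuming metaspace and code cache are rounded to the nearest KiB and the heap takes whatever remains.

```java
// Sketch: derive the apparent heap/metaspace/code-cache split from the
// configured Lambda memory size. Assumption: 10% metaspace and 5% code
// cache rounded to the nearest KiB, heap = remainder (~85%).
public class LambdaJvmFlags {

    /** Returns {maxHeapKb, maxMetaspaceKb, reservedCodeCacheKb}. */
    static long[] split(long memoryMb) {
        long totalKb = memoryMb * 1024;
        long metaspaceKb = Math.round(totalKb * 0.10); // -XX:MaxMetaspaceSize
        long codeCacheKb = Math.round(totalKb * 0.05); // -XX:ReservedCodeCacheSize
        long heapKb = totalKb - metaspaceKb - codeCacheKb; // -XX:MaxHeapSize
        return new long[] { heapKb, metaspaceKb, codeCacheKb };
    }

    public static void main(String[] args) {
        for (long mb : new long[] { 256, 384 }) {
            long[] s = split(mb);
            System.out.printf("%dM -> heap=%dk meta=%dk codecache=%dk%n",
                    mb, s[0], s[1], s[2]);
        }
    }
}
```

For 256M this yields 222823k / 26214k / 13107k, matching the flags shown above.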
I honestly don't know how the "Max Memory Used" value reported in CloudWatch logs is calculated. It doesn't align with anything I'm seeing.
What is happening here is that when you configure a larger memory size, Lambda increases your metaspace allocation proportionally, which gives the JVM enough room to load your function's classes and lets it run.