I have an AWS Lambda function, configured with only 128MB of memory, that is triggered by SNS (which is itself triggered by S3) and downloads the file from S3.
In my function, I have the following:
    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.events.SNSEvent;
    import com.amazonaws.services.s3.AmazonS3Client;

    public class LambdaHandler {

        private final AmazonS3Client s3Client = new AmazonS3Client();

        public void gdeltHandler(SNSEvent event, Context context) {
            System.out.println("Starting");
            // ... the logic that builds eventFiles from the SNS records is
            // commented out and excluded from this post (see below) ...
            System.out.println("Found " + eventFiles.size() + " event files");
        }
    }
I've commented out and excluded from this post all of the logic because I am getting an OutOfMemoryError which I have isolated to the creation of the AmazonS3Client object. When I take that object out, I don't get the error. The exact above code results in the OutOfMemoryError.
I assigned 128MB of memory to the function; is that really not enough to simply grab the credentials and instantiate the AmazonS3Client object?
I've tried giving the AmazonS3Client constructor
new EnvironmentVariableCredentialsProvider()
as well as
new InstanceProfileCredentialsProvider()
with similar results.
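For reference, this is roughly what those two attempts look like (a minimal sketch, assuming AWS Java SDK v1; the ClientCreationAttempts class name is just for illustration):

    import com.amazonaws.auth.EnvironmentVariableCredentialsProvider;
    import com.amazonaws.auth.InstanceProfileCredentialsProvider;
    import com.amazonaws.services.s3.AmazonS3Client;

    public class ClientCreationAttempts {
        // Attempt 1: read credentials from the Lambda environment variables.
        private final AmazonS3Client fromEnv =
                new AmazonS3Client(new EnvironmentVariableCredentialsProvider());

        // Attempt 2: use the instance/role profile credentials.
        private final AmazonS3Client fromProfile =
                new AmazonS3Client(new InstanceProfileCredentialsProvider());
    }

Both constructor overloads fail the same way as the no-argument constructor in my case.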
Does the creation of the AmazonS3Client object simply require more memory?
Below is the stack trace:
    Metaspace: java.lang.OutOfMemoryError
    java.lang.OutOfMemoryError: Metaspace
        at com.fasterxml.jackson.databind.deser.BeanDeserializerBuilder.build(BeanDeserializerBuilder.java:347)
        at com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.buildBeanDeserializer(BeanDeserializerFactory.java:242)
        at com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.createBeanDeserializer(BeanDeserializerFactory.java:143)
        at com.fasterxml.jackson.databind.deser.DeserializerCache._createDeserializer2(DeserializerCache.java:409)
        at com.fasterxml.jackson.databind.deser.DeserializerCache._createDeserializer(DeserializerCache.java:358)
        at com.fasterxml.jackson.databind.deser.DeserializerCache._createAndCache2(DeserializerCache.java:265)
        at com.fasterxml.jackson.databind.deser.DeserializerCache._createAndCacheValueDeserializer(DeserializerCache.java:245)
        at com.fasterxml.jackson.databind.deser.DeserializerCache.findValueDeserializer(DeserializerCache.java:143)
        at com.fasterxml.jackson.databind.DeserializationContext.findRootValueDeserializer(DeserializationContext.java:439)
        at com.fasterxml.jackson.databind.ObjectReader._prefetchRootDeserializer(ObjectReader.java:1588)
        at com.fasterxml.jackson.databind.ObjectReader.<init>(ObjectReader.java:185)
        at com.fasterxml.jackson.databind.ObjectMapper._newReader(ObjectMapper.java:558)
        at com.fasterxml.jackson.databind.ObjectMapper.reader(ObjectMapper.java:3108)
When I try providing the InstanceProfileCredentialsProvider or EnvironmentVariableCredentialsProvider, I get the following stack trace:
    Exception in thread "main" java.lang.Error: java.lang.OutOfMemoryError: Metaspace
        at lambdainternal.AWSLambda.<clinit>(AWSLambda.java:62)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at lambdainternal.LambdaRTEntry.main(LambdaRTEntry.java:94)
    Caused by: java.lang.OutOfMemoryError: Metaspace
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at lambdainternal.EventHandlerLoader$PojoMethodRequestHandler.makeRequestHandler(EventHandlerLoader.java:421)
        at lambdainternal.EventHandlerLoader.getTwoLengthHandler(EventHandlerLoader.java:777)
        at lambdainternal.EventHandlerLoader.getHandlerFromOverload(EventHandlerLoader.java:802)
        at lambdainternal.EventHandlerLoader.loadEventPojoHandler(EventHandlerLoader.java:888)
        at lambdainternal.EventHandlerLoader.loadEventHandler(EventHandlerLoader.java:740)
        at lambdainternal.AWSLambda.findUserMethodsImmediate(AWSLambda.java:126)
        at lambdainternal.AWSLambda.findUserMethods(AWSLambda.java:71)
        at lambdainternal.AWSLambda.startRuntime(AWSLambda.java:219)
        at lambdainternal.AWSLambda.<clinit>(AWSLambda.java:60)
        ... 3 more

    START RequestId: 58837136-483e-11e6-9ed3-39246839616a Version: $LATEST
    END RequestId: 58837136-483e-11e6-9ed3-39246839616a
    REPORT RequestId: 58837136-483e-11e6-9ed3-39246839616a Duration: 15002.92 ms Billed Duration: 15000 ms Memory Size: 128 MB Max Memory Used: 50 MB
2016-07-12T14:40:28.048Z 58837136-483e-11e6-9ed3-39246839616a Task timed out after 15.00 seconds
EDIT 1: If I increase the memory allocated to the function to even 192MB, it works just fine, though strangely enough, the CloudWatch logs report only 59MB of memory used. Am I simply losing the rest of the memory?
I have observed this when using the AWS Java SDK within a Lambda function. It seems that when creating any of the AWS clients (sync or async) you may run out of Metaspace.
I believe this is due to the work the Amazon client performs upon instantiation, including AmazonHttpClient creation as well as dynamic loading of the request handler chains (part of the private AmazonEC2Client#init() method).
It is possible that the reported memory usage covers only the heap and does not include Metaspace. There are a few threads on the AWS Forums about this, but no responses from AWS on the matter.
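One way to check this (a minimal sketch, assuming the standard java.lang.management API is available in the Lambda JVM; the MemoryReport class and logMemoryUsage method are just illustrative names) is to log heap and Metaspace usage separately from inside the handler and compare them with the "Max Memory Used" value in the REPORT line:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;

    public class MemoryReport {
        // Logs heap usage and Metaspace usage separately so they can be
        // compared against the "Max Memory Used" value reported by Lambda.
        public static void logMemoryUsage() {
            long heapUsed = ManagementFactory.getMemoryMXBean()
                    .getHeapMemoryUsage().getUsed();
            System.out.println("Heap used: " + (heapUsed / (1024 * 1024)) + " MB");

            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if ("Metaspace".equals(pool.getName())) {
                    long used = pool.getUsage().getUsed();
                    System.out.println("Metaspace used: " + (used / (1024 * 1024)) + " MB");
                }
            }
        }
    }

Calling something like this at the start and end of the handler would show whether Metaspace is growing toward its limit even while the reported heap usage stays low.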