What we do is take a request for an image such as "media/catalog/product/3/0/30123/768x/lorem.jpg", look up the original image at "media/catalog/product/3/0/30123.jpg", resize it to 768px wide, convert it to WebP if the browser supports that, and then return the new image (unless it is already cached).
If you request "wysiwyg/lorem.jpg", it will try to create a WebP at a maximum of 1920 pixels (no enlargement).
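The path handling amounts to something like this (a minimal sketch for illustration; parseResizeUri and the regular expression are assumptions, not our exact code):

const parseResizeUri = (uri) => {
    // "media/catalog/product/3/0/30123/768x/lorem.jpg"
    //   -> original key "media/catalog/product/3/0/30123.jpg", width 768
    const sized = uri.match(/^(.+)\/(\d+)x\/[^/]+\.jpe?g$/i);
    if (sized) {
        return { originalKey: sized[1] + '.jpg', width: parseInt(sized[2], 10) };
    }
    // "wysiwyg/lorem.jpg" -> no width segment, cap at 1920px (no enlargement)
    return { originalKey: uri, width: 1920 };
};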
This seems to work perfectly fine for images up to 1420 pixels wide. Above that, however, we only get HTTP 502: "The Lambda function returned invalid json: The json output is not parsable."
There is a similar issue on SO that relates to GZIP; however, as I understand it, you shouldn't really GZIP images: https://webmasters.stackexchange.com/questions/8382/gzipped-images-is-it-worth/57590#57590
It's possible that the original image was uploaded to S3 already gzipped, but the GZIP theory might be misleading, because why would it work for smaller images then? We have GZIP disabled in CloudFront.
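One way to rule that theory out is to inspect the original object's Content-Encoding, for example with the aws-sdk v2 headObject call (the bucket and key below are placeholders):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// If ContentEncoding comes back as 'gzip', the object really was uploaded gzipped
s3.headObject({ Bucket: 'my-bucket', Key: 'media/catalog/product/3/0/30123.jpg' },
    (err, data) => {
        if (err) throw err;
        console.log(data.ContentEncoding || 'no Content-Encoding set');
    });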
I have given the Lambda@Edge resize function the maximum resources: 3 GB of memory and a timeout of 30 seconds. Is this not sufficient for larger images?
I have deleted the already generated images and invalidated CloudFront, but it still behaves the same.
UPDATE:
I simply tried a different image and that one works fine. I have no idea why, or how I should fix the broken image. I guess CloudFront has cached the 502 now; I have invalidated using just "*", but it didn't help. Both original files are JPEGs.
The original source image for the working one is 6.1 MB and the non-working one is 6.7 MB, if that matters.
They have these limits: https://docs.aws.amazon.com/lambda/latest/dg/limits.html
The response.body is about 512 MB when it stops working.
The benefit of Lambda@Edge is that it uses the Amazon CloudFront content delivery network (CDN) to deliver function results globally. This is in contrast to regular Lambda, which requires you to deploy your function in each region you want to operate from.
Restrictions on Lambda@Edge:
- The deployed package size must not exceed a total of 1 MB.
- No environment variables are allowed.
- No Lambda layers are allowed.
Lambda@Edge functions are distributed globally, but they originate from one place. The reason is most likely that there needs to be a single source of truth, and they picked us-east-1.
Lambda@Edge scales automatically, but during the first minute of a traffic surge its concurrent executions are limited to a predetermined amount. Additionally, when scaling out, cold starts can increase function execution times by up to orders of magnitude compared with simple functions.
There are some low limits in Lambda, and especially in Lambda@Edge, on the response size. The limit is 1 MB for the entire generated response, headers and body included. If a Lambda function returns a bigger response, it is truncated, which can cause HTTP 5xx statuses such as the 502 above. See the documentation. Keep in mind that a binary body must be returned base64-encoded, which inflates it by roughly a third, so an image of around 750 KB already exceeds the limit once encoded.
You can overcome that by saving the resized image to S3 (or perhaps checking first whether it is already there) and then, instead of returning the image itself, responding with a 301 redirect to a CloudFront distribution integrated with that bucket, so the image request is redirected to the resized image.
For example, in Node.js with an origin-response trigger:
'use strict';

exports.handler = (event, context, callback) => {
    // Get the response from the origin-response event
    const response = event.Records[0].cf.response;
    const headers = response.headers;

    // Create the image, save it on S3, and generate target_url
    // ...

    // Turn the response into a redirect; note that CloudFront requires the
    // keys of the headers object to be lowercase (the key property inside
    // each entry keeps the canonical casing)
    response.status = '301';
    response.statusDescription = 'Moved Permanently';
    headers['location'] = [{ key: 'Location', value: target_url }];
    headers['x-reason'] = [{ key: 'X-Reason', value: 'Generated.' }];

    // Return the modified response
    callback(null, response);
};
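The elided "create image and save on S3" step could look roughly like this (a sketch under assumptions, not this answer's actual code; the helper name, bucket, and domain are placeholders, using the aws-sdk v2 API bundled with the Node.js Lambda runtime):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const saveResizedImage = (key, buffer) => {
    // Upload the resized image so future requests can be served
    // straight from the bucket behind CloudFront
    return s3.putObject({
        Bucket: 'my-resized-images',   // assumption
        Key: key,
        Body: buffer,
        ContentType: 'image/webp',
    }).promise().then(() => `https://images.example.com/${key}`); // target_url
};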
A version for a simple Lambda-generated response (without an origin-response trigger; the headers are replaced rather than modified):
exports.handler = (event, context, callback) => {
    // Create the image, save it on S3, and generate target_url
    // ...

    // Build the whole response from scratch (again with lowercase header keys)
    const response = {
        status: '301',
        statusDescription: 'Moved Permanently',
        headers: {
            'location': [{
                key: 'Location',
                value: target_url,
            }],
            'x-reason': [{
                key: 'X-Reason',
                value: 'Generated.',
            }],
        },
    };
    callback(null, response);
};
As an additional note to @Zbyszek's answer: you can roughly estimate whether the response is bigger than 1 MB like this:
const isResponseBiggerThan1MB = (body, responseWithoutBody) => {
    // Measure the headers etc. by serializing the response without its body
    const responseSizeWithoutBody = JSON.stringify(responseWithoutBody).length;
    return body.length + responseSizeWithoutBody >= 1000 * 1000;
};
The responseWithoutBody can't be too large or contain circular references, but in this case I can't imagine that you would have that. If it does contain circular references, JSON.stringify will throw, and you can simply remove the offending keys. If responseWithoutBody is too large, you need to remove the large values and measure them separately, as I am doing with response.body.
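Combined with the redirect approach above, the check could be used like this inside the origin-response handler (a sketch; resizedImageBase64 and uploadAndRedirect are hypothetical names):

if (isResponseBiggerThan1MB(resizedImageBase64, response)) {
    // Too big to return inline: upload to S3 and respond with a 301 instead
    uploadAndRedirect(response, callback);
} else {
    // Small enough to return directly from the function
    response.body = resizedImageBase64;
    response.bodyEncoding = 'base64';
    callback(null, response);
}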