
AWS Lambda SAM deployment: identifying and removing old S3 package versions?

I'm relatively new to AWS Lambda and SAM, and now that I've got things working I have a seemingly simple question I can't find an answer to.

I've spent the last week getting a Lambda app up and running using SAM (build, package, deploy numerous times until it worked).

Problem

So now the S3 bucket I'm uploading to contains numerous (100 or so) versions of my zipped-up code, previously uploaded by sam package.

Question

  1. How can you identify which zipped packages are the current ones (i.e. used by a current function and/or layer), and remove all the old, obsolete ones?
  2. Is there a way in SAM (command-line options or in the template files) to have it automatically delete old versions of your package when sam package uploads a new one?
  3. Is there somewhere in the AWS console that shows which zip file in your bucket a current function or layer is using? (I've looked everywhere but couldn't find it; it's easy to get the ARNs, but not the actual bucket URI they map to.)

Slight Complication

In the bucket I'm using to store the Lambda packages, I've also got a custom layer.
So if it were just the app packages, I could easily (right now) just go in, delete everything in the bucket, and do a re-build/package/deploy to clean it up. But that would also delete my layer (and, same problem, I'm not sure which zip file in the bucket the layer is using).

But that approach wouldn't work long term anyway, as I'm planning to put together roughly 10-15 different packages/functions, so deleting everything in the bucket whenever just one of them is updated isn't going to work.

Thanks for any thoughts, ideas and help!

Richard asked Oct 15 '22 09:10


2 Answers

1. In your packaged.yaml file (generated by sam package) you can see, under each Lambda function, a CodeUri with a unique path, s3://your-bucket/id. That id is the object used by the current function, and it resides in your bucket. For a layer it's ContentUri.

2. Automatically deleting old versions of your package when sam package uploads a new one: I'm not aware of anything like that.

3. Through the AWS console you can see your layer version, but I don't think there is any indication of your function/layer CodeUri/ContentUri.
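To make point 1 actionable, here is a minimal Python sketch that scans the text of a packaged.yaml for the s3:// URIs that sam package wrote into CodeUri/ContentUri properties. The regex and the sample template below are illustrative assumptions, not part of SAM itself:

```python
import re

def referenced_s3_uris(template_text):
    """Collect every s3://bucket/key URI that follows a CodeUri: or
    ContentUri: property in a packaged SAM template."""
    pattern = re.compile(r"(?:CodeUri|ContentUri):\s*(s3://\S+)")
    return set(pattern.findall(template_text))

# Hypothetical packaged.yaml fragment for demonstration
sample = """
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: s3://my-bucket/abc123
  MyLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      ContentUri: s3://my-bucket/def456
"""

print(sorted(referenced_s3_uris(sample)))
```

Every object in the bucket whose URI is not in that set is a candidate for cleanup, though note this only covers the most recent packaged.yaml you still have on disk.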

Assael Azran answered Oct 20 '22 17:10


You can try to compare the currently deployed stack with what you've stored in S3. Let's assume you have a stack called test-stack; then you can retrieve the processed template from CloudFormation using the AWS CLI like this:

AWS_PAGER="" aws cloudformation get-template --stack-name test-stack \
  --output json --template-stage Processed

To only get the processed template body, you may want to pipe the output again through

jq -r ".TemplateBody"

Now you have the processed CloudFormation template, which tells you which S3 buckets and keys it is using. Here is an example for a Lambda function:

MyLambda:
  Type: 'AWS::Lambda::Function'
  Properties:
    Code:
      S3Bucket: my-bucket
      S3Key: 0c53a7ccb1c1762eaeebd96555d13a20

You can then try to delete S3 objects that are not referenced by the current stack.
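As a rough sketch of that comparison step: once you have the S3Key values referenced by the processed template, everything else in the bucket is a deletion candidate. The keys below are hard-coded placeholders; in practice the referenced set would come from the get-template output above and the bucket listing from `aws s3api list-objects-v2` or boto3:

```python
# Keys the processed template still references (function code and layer content)
referenced_keys = {
    "0c53a7ccb1c1762eaeebd96555d13a20",  # current function deployment
    "layers/my-layer-v3.zip",            # current layer ContentUri
}

# Full listing of the deployment bucket (placeholder data)
bucket_keys = [
    "0c53a7ccb1c1762eaeebd96555d13a20",
    "9f2d41aa00c1762eaeebd96555d13b77",  # stale upload from an earlier sam package
    "layers/my-layer-v3.zip",
]

# Anything unreferenced is a candidate for deletion
obsolete = [key for key in bucket_keys if key not in referenced_keys]
print(obsolete)
```

Note this only protects objects referenced by the *current* stack; see the rollback caveat further down before actually deleting anything.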

There used to be a GitHub issue requesting some sort of automatic cleanup mechanism, but it was closed as out of scope: https://github.com/aws/serverless-application-model/issues/557#issuecomment-417867028

It may also be worth noting that you could set up an S3 lifecycle rule to automatically clean up old S3 objects, as suggested here: https://github.com/aws/aws-sam-cli/issues/648. However, I don't think this will always be a suitable solution.
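For illustration, a lifecycle configuration along those lines (the rule ID and the 90-day window are assumptions) could be applied with `aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://lifecycle.json`:

```json
{
  "Rules": [
    {
      "ID": "expire-old-sam-artifacts",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 90 }
    }
  ]
}
```

Be aware that a lifecycle rule expires objects purely by age, not by whether the current stack still references them, which is exactly why it isn't always suitable.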

Last but not least, there has been an attempt to include an automatic cleanup approach in the SAM documentation, but it was dismissed because:

[...] there are certain use cases that require these packaged S3 objects to persist, and deleting them would cause significant problems. One such example is the "CloudFormation stack deployment rollback" scenario: 1) Deploy version N of a stack, 2) Delete the packaged S3 object that version N uses, 3) Deploy version N+1 with a "bad" template file that triggers a CloudFormation rollback.

https://github.com/awsdocs/aws-sam-developer-guide/pull/3#issuecomment-462993286

So while it is possible to identify obsolete packaged S3 versions, it might not always be a good idea to delete them after all.

Jan Gassen answered Oct 20 '22 15:10