I'm trying to create a stack with CloudFormation. The stack needs to take some data files from a central S3 bucket and copy them to its own "local" bucket.
I've written a Lambda function to do this, and it works when I run it in the Lambda console with a test event (the test event uses the real central repository and successfully copies the file to a specified destination bucket).
My current CloudFormation script sets the stack up in several steps, the last of which (step 4) invokes the Lambda function as a custom resource. It's at step 4 where it starts to go wrong: the CloudFormation execution seems to freeze there (CREATE_IN_PROGRESS). And when I try to delete the stack, it just gets stuck on DELETE_IN_PROGRESS instead.
Here's how I'm invoking the Lambda function in the CloudFormation script:
"DataSync": {
"Type": "Custom::S3DataSync",
"Properties": {
"ServiceToken": { "Fn::GetAtt" : [ "S3DataSync", "Arn" ] },
"InputFile": "data/provided-as-ip-v6.json",
"OutputFile": "data/data.json"
}
},
"KeySync1": {
"Type": "Custom::S3DataSync",
"Properties": {
"ServiceToken": { "Fn::GetAtt" : [ "S3DataSync", "Arn" ] },
"InputFile": "keys/1/public_key.pem"
}
},
"KeySync2": {
"Type": "Custom::S3DataSync",
"Properties": {
"ServiceToken": { "Fn::GetAtt" : [ "S3DataSync", "Arn" ] },
"InputFile": "keys/2/public_key.pem"
}
}
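For reference, the ServiceToken in each resource points at a Lambda function named S3DataSync defined elsewhere in the template. A minimal sketch of what that resource could look like, with the role name, code location, and timeout assumed for illustration rather than taken from the actual template:

"S3DataSync": {
    "Type": "AWS::Lambda::Function",
    "Properties": {
        "Handler": "index.handler",
        "Runtime": "nodejs",
        "Timeout": 30,
        "Role": { "Fn::GetAtt": [ "S3DataSyncRole", "Arn" ] },
        "Code": {
            "S3Bucket": "my-lambda-code-bucket",
            "S3Key": "s3datasync.zip"
        }
    }
}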
And the Lambda function itself:
exports.handler = function(event, context) {
    var buckets = {};
    buckets.in = {
        "Bucket": "central-data-repository",
        "Key": "sandbox" + "/" + event.ResourceProperties.InputFile
    };
    buckets.out = {
        "Bucket": "sandbox-data",
        "Key": event.ResourceProperties.OutputFile || event.ResourceProperties.InputFile
    };

    var AWS = require('aws-sdk');
    var S3 = new AWS.S3();

    S3.getObject(buckets.in, function(err, data) {
        if (err) {
            console.log("Couldn't get file " + buckets.in.Key);
            context.fail("Error getting file: " + err);
        }
        else {
            buckets.out.Body = data.Body;
            S3.putObject(buckets.out, function(err, data) {
                if (err) {
                    console.log("Couldn't write to S3 bucket " + buckets.out.Bucket);
                    context.fail("Error writing file: " + err);
                }
                else {
                    console.log("Successfully copied " + buckets.in.Key + " to " + buckets.out.Bucket + " at " + buckets.out.Key);
                    context.succeed();
                }
            });
        }
    });
};
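For reference, each Custom::S3DataSync resource above invokes the function with a request event shaped roughly like this (identifiers and the pre-signed URL are abbreviated, and the region is assumed); note the RequestType and ResponseURL fields, which matter for the answer below:

{
    "RequestType": "Create",
    "ResponseURL": "https://cloudformation-custom-resource-response-useast1.s3.amazonaws.com/...",
    "StackId": "arn:aws:cloudformation:us-east-1:...",
    "RequestId": "...",
    "ResourceType": "Custom::S3DataSync",
    "LogicalResourceId": "DataSync",
    "ResourceProperties": {
        "ServiceToken": "arn:aws:lambda:us-east-1:...",
        "InputFile": "data/provided-as-ip-v6.json",
        "OutputFile": "data/data.json"
    }
}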
Your Custom Resource function needs to send signals back to CloudFormation to indicate completion, status, and any returned values. You will see CREATE_IN_PROGRESS as the status in CloudFormation until you notify it that your function is complete. The same mechanism explains the deletion hang: the function also receives a Delete request when the stack is torn down, and the stack sits in DELETE_IN_PROGRESS until that request is acknowledged.
The generic way of signaling CloudFormation is to post a response to a pre-signed S3 URL (the ResponseURL field shown in the event above). But there is a cfn-response module to make this easier in Lambda functions. Interestingly, the two examples AWS provides for Lambda-backed Custom Resources use different methods.
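Here's a minimal sketch of the handler above rewritten to signal CloudFormation with cfn-response. It assumes the function code is inlined in the template through the ZipFile property, since that is the only case where Lambda bundles the cfn-response module automatically; with code uploaded as a zip archive you would instead PUT the response JSON to event.ResponseURL yourself.

var response = require('cfn-response'); // only available to inline (ZipFile) function code
var AWS = require('aws-sdk');

exports.handler = function(event, context) {
    // Custom resources also receive Delete requests when the stack is torn down;
    // acknowledging them is what prevents the DELETE_IN_PROGRESS hang.
    if (event.RequestType === "Delete") {
        response.send(event, context, response.SUCCESS);
        return;
    }

    var buckets = {
        in: {
            Bucket: "central-data-repository",
            Key: "sandbox/" + event.ResourceProperties.InputFile
        },
        out: {
            Bucket: "sandbox-data",
            Key: event.ResourceProperties.OutputFile || event.ResourceProperties.InputFile
        }
    };

    var S3 = new AWS.S3();
    S3.getObject(buckets.in, function(err, data) {
        if (err) {
            // FAILED reports the error to CloudFormation instead of leaving it waiting
            response.send(event, context, response.FAILED, { Error: "Error getting file: " + err });
            return;
        }
        buckets.out.Body = data.Body;
        S3.putObject(buckets.out, function(err) {
            if (err) {
                response.send(event, context, response.FAILED, { Error: "Error writing file: " + err });
            } else {
                response.send(event, context, response.SUCCESS);
            }
        });
    });
};

response.send posts the result to the pre-signed ResponseURL and then calls context.done internally, so no separate context.succeed or context.fail call is needed.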