Basically, I need to set up CI/CD from a Bitbucket repository to ECS containers. I want to use CodePipeline to deploy a new ECR image to ECS.
Currently, there is no option in AWS CodePipeline to specify Bitbucket as the source. However, I've managed to configure CodeBuild with webhooks so that it builds the Dockerfile and pushes the image to ECR on every push to the release branch.
I want to configure ECR as the "source" stage in CodePipeline and deploy it to the existing ECS cluster/service so the deploy is automated.
Unfortunately, the basic configuration with artifact chaining results in the following error in the deploy step:
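For reference, the webhook-triggered CodeBuild job can use a buildspec along these lines; this is only a sketch, and the repository name and environment variables (ACCOUNT_ID, AWS_DEFAULT_REGION) are assumptions you'd set in the CodeBuild project:

```yaml
version: 0.2
phases:
  pre_build:
    commands:
      # Authenticate Docker against the account's ECR registry.
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      # "my-repo" is a placeholder for your ECR repository name.
      - docker build -t $ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/my-repo:latest .
  post_build:
    commands:
      - docker push $ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/my-repo:latest
```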
Invalid action configuration
The image definition file imageDetail.json contains invalid JSON format
Though the "Amazon ECR" source stage provides imageDetail.json as an output artifact, it does not seem to be what the "Amazon ECS" deploy provider expects. Is there any rational way to get around this issue?
I'm aware that it is possible to configure CI/CD with Bitbucket + API Gateway/Lambda + CodePipeline, and I'm also considering CodeCommit instead of Bitbucket as the source repo; still, I hope there is an elegant way to use Bitbucket with CodePipeline directly.
UPD: I've ended up with a pretty nice configuration, described in this blogpost: the overall idea is to have CodeBuild upload the source code from Bitbucket to S3, then use CodePipeline with S3 as the source to deploy the new Docker image to ECR and publish a new task definition revision in the ECS cluster. S3 is still overhead, and I'm searching for a more elegant solution for the task.
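As a sketch of that workaround (the bucket name and object key are assumptions, and must match what the pipeline's S3 source action watches), the webhook-triggered CodeBuild job can simply zip the checked-out Bitbucket source and copy it into the bucket:

```yaml
version: 0.2
phases:
  build:
    commands:
      # Zip the checked-out Bitbucket source and drop it where the
      # CodePipeline S3 source action is watching.
      - zip -r source.zip .
      - aws s3 cp source.zip s3://BUCKETNAME/source.zip
```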
This step is optional if you have already created a build stage. On the Step 4: Add deploy stage page, choose Skip deploy stage if you created a build stage in the previous step, and then choose Next. This option does not appear if you have already skipped the build stage.
I just recently had to solve a similar issue where I wanted to use ECR as the source of my pipeline and have it deploy the image to ECS. The solution I found was to create 3 stages: an ECR source stage, a CodeBuild build stage that converts imageDetail.json into imagedefinitions.json, and an ECS deploy stage.
Here's the buildspec.yml file I'm using as my build stage:
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
  build:
    commands:
      - PHP_REPOSITORY_URI=$(cat imageDetail.json | python -c "import sys, json; print(json.load(sys.stdin)['ImageURI'].split('@')[0])")
      - IMAGE_TAG=$(cat imageDetail.json | python -c "import sys, json; print(json.load(sys.stdin)['ImageTags'][0])")
      - echo $PHP_REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Writing image definitions file...
      - printf '[{"name":"container","imageUri":"%s"}]' $PHP_REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json
Basically, this reads the imageDetail.json file, extracts the ECR repository URI and tag, and outputs a JSON file formatted for the ECS deploy stage, which is just a standard stage without customization. Note that "container" in the printf must match the container name in your ECS task definition.
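To see what those one-liners do, the same transformation can be sketched in plain Python; the payload here is a hand-written sample in the shape of ECR's imageDetail.json, not real output:

```python
import json

# Hand-written sample in the shape ECR's source action emits
# (account ID, repository name, and digest are placeholders).
image_detail = {
    "ImageURI": "ACCOUNTID.dkr.ecr.us-west-2.amazonaws.com/dk-image-repo@sha256:example3",
    "ImageTags": ["latest"],
}

repository_uri = image_detail["ImageURI"].split("@")[0]  # drop the digest suffix
image_tag = image_detail["ImageTags"][0]

# Single-element list in the shape the ECS deploy action expects;
# "container" must match the container name in your task definition.
image_definitions = [{"name": "container", "imageUri": f"{repository_uri}:{image_tag}"}]
print(json.dumps(image_definitions))
```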
I had a similar use case and hit the same problem. It's a bit of a long answer, with a solution that addresses my use case...
As per the official AWS documentation, the ECS standard deployment action expects an imagedefinitions.json file that provides the container name and image URI. It should look like:
[
  {
    "name": "sample-app",
    "imageUri": "11111EXAMPLE.dkr.ecr.us-west-2.amazonaws.com/ecs-repo:latest"
  }
]
But the ECR source produces an output artifact called imageDetail.json, an example of which is shown below. It does not match the input format expected by the ECS standard deploy action (imagedefinitions.json, which includes the container name in the name field), so the deploy fails with the error quoted in the question:
{
  "ImageSizeInBytes": "44728918",
  "ImageDigest": "sha256:EXAMPLE11223344556677889900bfea42ea2d3b8a1ee8329ba7e68694950afd3",
  "Version": "1.0",
  "ImagePushedAt": "Mon Jan 21 20:04:00 UTC 2019",
  "RegistryId": "EXAMPLE12233",
  "RepositoryName": "dk-image-repo",
  "ImageURI": "ACCOUNTID.dkr.ecr.us-west-2.amazonaws.com/dk-image-repo@sha256:example3",
  "ImageTags": [
    "latest"
  ]
}
The approach I took to fix this is:
In the Source stage: in addition to the ECR source, I added an S3 source that contains imagedefinitions.json in a zip.
In the ECS deploy stage action, I refer to the output artifact from the S3 source, which contains imagedefinitions.json in the format that the ECS standard deploy understands.
Note: the imagedefinitions.json in the S3 bucket is static and always refers to the latest tag on the image in question. So in the QA image definitions bucket I end up with one image definitions zip per instance of Fargate service.
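For completeness, producing that static zip locally can be sketched as follows (the container name, image URI, and archive name are placeholders; the upload itself would be an aws s3 cp of the resulting archive to the bucket the pipeline watches):

```python
import json
import zipfile

# Static image definitions always pointing at the :latest tag, as described
# above. "sample-app" and the repository URI are placeholders for your values.
image_defs = [{
    "name": "sample-app",
    "imageUri": "ACCOUNTID.dkr.ecr.us-west-2.amazonaws.com/ecs-repo:latest",
}]

with open("imagedefinitions.json", "w") as f:
    json.dump(image_defs, f)

# CodePipeline's S3 source action expects a zip archive.
with zipfile.ZipFile("PIPELINENAME.zip", "w") as zf:
    zf.write("imagedefinitions.json")
```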
I've exported my pipeline here for general reference:
{
  "pipeline": {
    "roleArn": "arn:aws:iam::ACCOUNTID:role/service-role/AWSCodePipelineServiceRole-REGION-PIPELINENAME",
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "inputArtifacts": [],
            "name": "Source",
            "region": "REGION",
            "actionTypeId": {
              "category": "Source",
              "owner": "AWS",
              "version": "1",
              "provider": "ECR"
            },
            "outputArtifacts": [
              {
                "name": "SourceArtifact"
              }
            ],
            "configuration": {
              "ImageTag": "latest",
              "RepositoryName": "PIPELINENAME"
            },
            "runOrder": 1
          },
          {
            "inputArtifacts": [],
            "name": "sourceimagedeffile",
            "region": "REGION",
            "actionTypeId": {
              "category": "Source",
              "owner": "AWS",
              "version": "1",
              "provider": "S3"
            },
            "outputArtifacts": [
              {
                "name": "PIPELINENAME-imagedefjson"
              }
            ],
            "configuration": {
              "S3Bucket": "BUCKETNAME",
              "PollForSourceChanges": "true",
              "S3ObjectKey": "PIPELINENAME.zip"
            },
            "runOrder": 1
          }
        ]
      },
      {
        "name": "Deploy",
        "actions": [
          {
            "inputArtifacts": [
              {
                "name": "PIPELINENAME-imagedefjson"
              }
            ],
            "name": "Deploy",
            "region": "REGION",
            "actionTypeId": {
              "category": "Deploy",
              "owner": "AWS",
              "version": "1",
              "provider": "ECS"
            },
            "outputArtifacts": [],
            "configuration": {
              "ClusterName": "FARGATECLUSTERNAME",
              "ServiceName": "PIPELINENAME",
              "FileName": "imagedefinitions.json"
            },
            "runOrder": 1
          }
        ]
      }
    ],
    "artifactStore": {
      "type": "S3",
      "location": "codepipeline-REGION-555869339681"
    },
    "name": "PIPELINENAME"
  }
}