 

Determine required permissions for AWS CDK

I'm working with AWS CDK, and every time I go to create a new resource (CodePipeline, VPC, etc.) I end up in the same loop of...

  • try to deploy
  • "you are not authorized to foo:CreateBar"
  • update IAM permissions
  • try to deploy
  • "you are not authorized to baz:CreateZzz"
  • update IAM permissions

...over and over again. Then the same thing happens when I run cdk destroy, but for "foo:DeleteFoo".

Is there a more efficient way to determine what permissions a policy needs in order to perform a given CDK action? Maybe there's somewhere in the documentation I can reference?

Thanks

Donald P asked Mar 28 '20


People also ask

What IAM permissions are needed to use CDK deploy?

Previous answer for CDK v1: I'm using the policy below to deploy CDK apps. Besides CloudFormation full access and S3 full access to the CDK staging bucket, it grants permission to do everything through CloudFormation. You might want to add some explicit denies for things you don't want to allow.
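
The policy itself isn't reproduced in that snippet. Below is a rough sketch of what such a CDK v1 deployer policy could look like; the policy name, the staging-bucket ARN pattern, and the exact statements are assumptions for illustration, not the original policy.

# Hypothetical sketch only: CloudFormation full access plus S3 access to the
# default CDK v1 staging bucket. Adjust the name and ARNs for your own account.
aws iam create-policy \
  --policy-name cdk-v1-deployer-sketch \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "CloudFormationFullAccess",
        "Effect": "Allow",
        "Action": "cloudformation:*",
        "Resource": "*"
      },
      {
        "Sid": "CdkStagingBucketAccess",
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::cdktoolkit-stagingbucket-*"
      }
    ]
  }'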

How do I check IAM role permissions?

To test a policy that is attached to a user group, you can launch the IAM policy simulator directly from the IAM console: in the navigation pane, choose User groups, choose the name of the group you want to test a policy on, open the Permissions tab, and then choose Simulate.
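
You can also run the simulator from the CLI with aws iam simulate-principal-policy. The principal ARN and action names below are placeholders; substitute your own.

# Check whether a principal would be allowed specific actions (placeholders used).
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:user/cdk-deployer \
  --action-names cloudformation:CreateStack s3:PutObject \
  --query 'EvaluationResults[].[EvalActionName,EvalDecision]' \
  --output table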

What is bootstrapping in CDK?

cdk bootstrap is a tool in the AWS CDK command-line interface responsible for populating a given environment (that is, a combination of AWS account and region) with resources required by the CDK to perform deployments into that environment.
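
For example, bootstrapping is a one-time command per account/region combination (the account ID and region below are placeholders):

# Bootstrap the target environment once before the first cdk deploy.
cdk bootstrap aws://123456789012/us-east-1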


1 Answer

Here is a script that executes whatever command you pass to it, records timestamps for when the command started and finished, and then prints every AWS API event that CloudTrail recorded for the configured default AWS user during that window. It can take around 20 minutes for actions to show up in CloudTrail, so the script polls every minute until it gets results for that time range. If no AWS API calls were made during the range, no results will ever be returned; it's a simple script with no maximum timeout.

#!/bin/bash -x

# Resolve the IAM user name of the default CLI credentials from the caller identity ARN.
user_name=$(aws sts get-caller-identity | jq -r '.Arn' | sed -e 's/user\// /g' | awk '{print $2}')
sleep 5 # Sleep so the sts call above doesn't land in our time range

start_time=$(date)
sleep 1 # Sleep to avoid millisecond rounding issues

# Run whatever command(s) were passed to the script.
eval "$@"

sleep 1 # Sleep to avoid millisecond rounding issues
end_time=$(date)

# Poll CloudTrail every minute until events for the time range show up (this can
# take ~20 minutes). The pipeline turns each event's "eventSource eventName" pair
# into an action-style string, strips digits, and de-duplicates the list.
actions=""
while [ -z "$actions" ]; do
  sleep 60
  echo "Checking for events from $start_time to $end_time..."
  actions=$(aws cloudtrail lookup-events \
      --lookup-attributes AttributeKey=Username,AttributeValue="${user_name}" \
      --start-time "${start_time}" --end-time "${end_time}" \
    | jq -r '.Events[].CloudTrailEvent' | jq -s '.' \
    | jq -r '.[] | "\(.eventSource) \(.eventName)"' \
    | sed -e 's/.amazonaws.com /:/g' | sed -e 's/[0-9]//g' \
    | sort | uniq)
done

echo "AWS Actions Used:"
echo "$actions"
I call it get-aws-actions.sh. It requires the AWS CLI and jq to be installed. For CDK I would use it like this:

./get-aws-actions.sh "cdk deploy && cdk destroy"

I'd have my admin-level credentials configured as the default profile, so I know the deployment won't fail because of permission issues, and then I use the results returned by this script to grant permissions to a more specific deployment user/role for long-term use. The catch is that on the first run you may only see a bunch of :Create* or :Add* actions, but you really need to add all the lifecycle actions for the ones you see. So if you see dynamodb:CreateTable, make sure you also add dynamodb:UpdateTable and dynamodb:DeleteTable. If you see s3:PutBucketPolicy, you'll also want s3:DeleteBucketPolicy.
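
For example, if the script reported dynamodb:CreateTable and s3:PutBucketPolicy, a rounded-out statement for the deployer policy might look like the sketch below. The extra read/describe actions and the "*" resource are assumptions; tighten them to your own resources.

# Sketch: expand the Create/Put actions the script reported into their full
# lifecycle before attaching the statement to the deployer role/user.
cat > deployer-lifecycle-statement.json <<'EOF'
{
  "Effect": "Allow",
  "Action": [
    "dynamodb:CreateTable",
    "dynamodb:UpdateTable",
    "dynamodb:DeleteTable",
    "dynamodb:DescribeTable",
    "s3:PutBucketPolicy",
    "s3:GetBucketPolicy",
    "s3:DeleteBucketPolicy"
  ],
  "Resource": "*"
}
EOF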

To be honest, for any service whose API calls don't grant access to data, I just use <service>:*. An example is ECS: I can't use the ECS API to do anything to a container that CloudFormation won't already need to do to manage the service, so if I knew I was deploying containers I'd just grant ecs:* on * to my deployer role. For services like S3, Lambda, SQS, and SNS, where the API covers data access as well as resource creation, I need to be more deliberate with the permissions granted. My deployer role shouldn't be able to read the data in every bucket or invoke every function, but it does need to create buckets and functions.
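
As a rough illustration of that split, a deployer policy might combine a broad grant for a non-data service like ECS with management-only actions for a data service like S3. The action lists here are illustrative, not exhaustive, and the "*" resources are assumptions.

cat > deployer-policy-sketch.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BroadGrantForNonDataService",
      "Effect": "Allow",
      "Action": "ecs:*",
      "Resource": "*"
    },
    {
      "Sid": "ManagementOnlyForDataService",
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:DeleteBucket",
        "s3:PutBucketPolicy",
        "s3:DeleteBucketPolicy",
        "s3:PutBucketTagging"
      ],
      "Resource": "*"
    }
  ]
}
EOF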

Max Schenkelberg answered Oct 04 '22