I have a job-processing architecture on AWS that requires EC2 instances to query S3 and SQS. In order for running instances to have access to the API, the credentials are sent as user data (-f) in the form of a base64-encoded shell script. For example:
$ cat ec2.sh
...
export AWS_ACCOUNT_NUMBER='1111-1111-1111'
export AWS_ACCESS_KEY_ID='0x0x0x0x0x0x0x0x0x0'
...
$ zip -P 'secret-password' ec2.zip ec2.sh
$ openssl enc -base64 -in ec2.zip
Many instances are launched...
$ ec2run ami-a83fabc0 -n 20 -f ec2.zip
Each instance decodes and decrypts ec2.zip using the 'secret-password', which is hard-coded into an init script. Although it does work, I have two issues with my approach.
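For reference, the instance-side step looks roughly like this (a sketch only; the metadata URL and temporary file names are assumptions based on the commands above, not the exact init script):

$ curl -s http://169.254.169.254/1.0/user-data > /tmp/ec2.b64   # fetch the base64-encoded user data
$ openssl enc -base64 -d -in /tmp/ec2.b64 -out /tmp/ec2.zip     # decode it
$ unzip -P 'secret-password' -d /tmp /tmp/ec2.zip               # decrypt/extract ec2.sh
$ . /tmp/ec2.sh                                                 # export the AWS_* variables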
The method is very similar to the one described here
Is there a more elegant or accepted approach? Using gpg to encrypt the credentials and storing the private key on the instance to decrypt them is an approach I'm considering now, but I'm unaware of any caveats. Can I use the AWS keypairs directly? Am I missing some super-obvious part of the API?
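For context, the gpg variant I have in mind would be something like the following (sketch only; the key name and file names are placeholders):

# on the machine that launches instances: encrypt to a key whose private half is baked into the AMI
$ gpg --encrypt --recipient 'ec2-deploy' --output ec2.sh.gpg ec2.sh

# on the instance, at boot: decrypt with the private key stored in the image
$ gpg --decrypt --output ec2.sh ec2.sh.gpg
$ . ec2.sh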
You can store the credentials on the machine (or transfer, use, then remove them.)

You can transfer the credentials over a secure channel (e.g. using scp with non-interactive authentication, e.g. a key pair), so that you would not need to perform any custom encryption (only make sure that permissions are properly set to 0400 on the key file at all times, e.g. set the permissions on the master files and use scp -p).
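For instance, a minimal sketch of such a transfer (host name, key file, and paths are placeholders):

$ chmod 0400 ec2-credentials.sh   # lock down the master copy
$ # -p preserves the 0400 mode on the copy; -i selects the non-interactive key pair
$ scp -p -i deploy-key.pem ec2-credentials.sh ec2-user@host:/home/ec2-user/.ec2-credentials.sh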
If the above does not answer your question, please provide more specific details regarding your setup and what you are trying to achieve. Are EC2 actions to be initiated on multiple nodes from a central location? Is SSH available between the multiple nodes and the central location? Etc.
EDIT
Have you considered parameterizing your AMI, requiring those who instantiate your AMI to first populate the user data (ec2-run-instances -f user-data-file) with their AWS keys? Your AMI can then dynamically retrieve these per-instance parameters from http://169.254.169.254/1.0/user-data.
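For example, the first-boot script inside the AMI could do something along these lines (a sketch, assuming the user data is a plain shell fragment exporting AWS_* variables as in the question):

#!/bin/sh
# retrieve the per-instance parameters from the metadata service
USER_DATA=$(curl -s http://169.254.169.254/1.0/user-data)
eval "$USER_DATA"   # brings AWS_ACCESS_KEY_ID etc. into the environment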
UPDATE
OK, here goes a security-minded comparison of the various approaches discussed so far:
- user-data, unencrypted: any user on the instance can read the credentials (telnet, curl, wget, etc. can fetch the clear-text http://169.254.169.254/1.0/user-data).
- user-data, encrypted (or decryptable) with an easily obtainable key: same exposure; any user can fetch the clear-text http://169.254.169.254/1.0/user-data and then decrypt it with the easily obtainable key.
- user-data, encrypted with a key that is not easily obtainable: telnet, curl, wget, etc. can still fetch the encrypted http://169.254.169.254/1.0/user-data, so security improves only to the extent that access to the decryption key is restricted (e.g. to root, requiring interactive impersonation).

So any method involving the AMI user-data is not the most secure, because gaining access to any user on the machine (the weakest point) compromises the data.
This could be mitigated if the S3 credentials were only required for a limited period of time (i.e. during the deployment process only) and AWS allowed you to overwrite or remove the contents of user-data when done with it (but this does not appear to be the case.) An alternative would be the creation of temporary S3 credentials for the duration of the deployment process, if possible (compromising these credentials, from user-data, after the deployment process is completed and the credentials have been invalidated with AWS, no longer poses a security threat.)
If the above is not applicable (e.g. S3 credentials needed by deployed nodes indefinitely) or not possible (e.g. cannot issue temporary S3 credentials for deployment only), then the best method remains to bite the bullet and scp the credentials to the various nodes, possibly in parallel, with the correct ownership and permissions.
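A rough sketch of that, assuming a central deployment host (node names, key file, and destination paths are placeholders):

# push the credentials to each node in parallel, then fix ownership and permissions
for host in node1 node2 node3; do
  ( scp -p -i deploy-key.pem ec2-credentials.sh ec2-user@"$host":/tmp/ec2-credentials.sh &&
    ssh -i deploy-key.pem ec2-user@"$host" \
      'sudo install -o root -g root -m 0400 /tmp/ec2-credentials.sh /root/.ec2-credentials.sh && rm -f /tmp/ec2-credentials.sh' ) &
done
wait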
I wrote an article examining various methods of passing secrets to an EC2 instance securely and the pros & cons of each.
http://www.shlomoswidler.com/2009/08/how-to-keep-your-aws-credentials-on-ec2/