Run aws_s3 task on remote with environment credentials from executor

I would like to upload a file from a remote host to an S3 bucket, but with credentials from the local execution environment. Is that possible?

- name: Upload file
  hosts: '{{ target }}'
  gather_facts: false
  tasks:
    - name: copy file to bucket
      become: yes
      aws_s3:
        bucket: '{{ bucket_name }}'
        object: '{{ filename }}'
        src: '/var/log/{{ filename }}'
        mode: put

Is there any switch or option I could use? Ideally it would be something like this:

AWS_PROFILE=MyProfile ansible-playbook upload_file.yml -e target=somehost -e bucket_name=mybucket -e filename=myfile

That way I could specify the profile from my own local ~/.aws/config file.
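For reference, the profile would come from the standard AWS CLI files, e.g. an entry like this (placeholder values) in ~/.aws/credentials:

[MyProfile]
aws_access_key_id = <KEY_ID>
aws_secret_access_key = <SECRET_KEY>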

Obviously when running the playbook like this:

ansible-playbook upload_file.yml -e target=somehost -e bucket_name=mybucket -e filename=myfile

I'm getting the following error:

TASK [copy file to bucket] ******************************************************************************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NoCredentialsError: Unable to locate credentials
fatal: [somehost]: FAILED! => {"boto3_version": "1.7.50", "botocore_version": "1.10.50", "changed": false, "msg": "Failed while looking up bucket (during bucket_check) adverity-trash.: Unable to locate credentials"}

But when I try the following:

 AWS_ACCESS_KEY=<OWN_VALID_KEY> AWS_SECRET_KEY=<OWN_VALID_SECRET> ansible-playbook upload_file.yml -e target=somehost -e bucket_name=mybucket -e filename=myfile

I get the same error.

Ansible v2.6

asked by Kuba

2 Answers

The problem here is: how do I pass environment variables from one host to another? The answer lies in hostvars. Feel free to do your own search on hostvars, but this will give a general idea: https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#how-do-i-see-all-the-inventory-vars-defined-for-my-host
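For a quick look at what hostvars holds for a given host, a minimal sketch using the debug module:

- name: Show everything Ansible knows about localhost
  debug:
    var: hostvars['localhost']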

Step 1: Gather the AWS environment credentials from localhost (where you are running Ansible from). Important: make sure to set gather_facts to true, otherwise the env lookup plugin won't find the keys (assuming you've set them up as environment variables on localhost).

- name: Set Credentials
  hosts: localhost
  gather_facts: true
  tasks:
    - name: Set AWS KEY ID
      set_fact:
        AWS_ACCESS_KEY_ID: "{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"
    - name: Set AWS SECRET
      set_fact:
        AWS_SECRET_ACCESS_KEY: "{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"

Step 2: Import those values from localhost using set_fact and the hostvars magic variable. Note that both plays must be part of the same playbook run, so that the facts set on localhost are still available via hostvars.

Step 3: Use the environment variables on {{target}}

Steps 2 and 3 are put together below.

- name: Upload file
  hosts: '{{ target }}'
  gather_facts: false
  tasks:
    - name: Get AWS KEY ID
      set_fact:
        aws_key_id: "{{ hostvars['localhost']['AWS_ACCESS_KEY_ID'] }}"
    - name: Get AWS SECRET KEY
      set_fact:
        aws_secret_key: "{{ hostvars['localhost']['AWS_SECRET_ACCESS_KEY'] }}"
    - name: copy file to bucket
      become: yes
      aws_s3:
        bucket: '{{ bucket_name }}'
        object: '{{ filename }}'
        src: '/var/log/{{ filename }}'
        mode: put
        aws_access_key: '{{ aws_key_id }}'
        aws_secret_key: '{{ aws_secret_key }}'
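Running it is then just a matter of exporting the keys locally (the variable names must match the env lookups above), e.g.:

AWS_ACCESS_KEY_ID=<OWN_VALID_KEY> AWS_SECRET_ACCESS_KEY=<OWN_VALID_SECRET> ansible-playbook upload_file.yml -e target=somehost -e bucket_name=mybucket -e filename=myfile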
answered by eco

Here's a satisfying solution to my problem.

With the help of @einarc and the Ansible hostvars I was able to achieve remote upload capability with credentials coming from the local environment. Fact gathering was not necessary, and I used delegate_to to run some tasks locally. Everything is in one playbook:

- name: Transfer file
  hosts: '{{ target }}'
  gather_facts: false
  tasks:
    - name: Set AWS KEY ID
      set_fact:
        aws_key_id: "{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"
      delegate_to: 127.0.0.1
    - name: Set AWS SECRET
      set_fact:
        aws_secret_key: "{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"
      delegate_to: 127.0.0.1
    - name: Get AWS KEY ID
      set_fact:
        aws_key_id: "{{ hostvars[inventory_hostname]['aws_key_id'] }}"
    - name: Get AWS SECRET KEY
      set_fact:
        aws_secret_key: "{{ hostvars[inventory_hostname]['aws_secret_key'] }}"
    - name: ensure boto3 is available
      become: true
      pip:
        name: boto3
        state: present
    - name: copy file to bucket
      become: yes
      aws_s3:
        aws_access_key: '{{ aws_key_id }}'
        aws_secret_key: '{{ aws_secret_key }}'
        bucket: my-bucket
        object: '{{ filename }}'
        src: '/some/path/{{ filename }}'
        mode: put
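This assumes AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set in the local environment when the playbook is launched, e.g.:

AWS_ACCESS_KEY_ID=<OWN_VALID_KEY> AWS_SECRET_ACCESS_KEY=<OWN_VALID_SECRET> ansible-playbook transfer_to_s3.yml -e target=somehost -e filename=myfile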

Bonus: I found a way to avoid putting the AWS credentials on the command line explicitly.

I've used the following bash wrapper to read the credentials from the config file with the help of the AWS CLI.

#!/bin/bash
# Read the keys for the profile given as the first argument.
AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id --profile "$1")
AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key --profile "$1")

# Note: no trailing whitespace after the backslashes, or the
# line continuation breaks.
AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
ansible-playbook transfer_to_s3.yml -e target="$2" -e filename="$3"
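Calling the wrapper (saved here as, say, transfer.sh; the script name is just an example):

./transfer.sh MyProfile somehost myfile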
answered by Kuba

