I'm using the gcp_compute_instance Ansible module to create an instance, basically reproducing what's in this document. This works fine to create the instance, but afterwards I can't configure it with a different playbook, because the instance doesn't recognize the service account as a valid SSH user. The error I get when I run the second playbook is:
Failed to connect to the host via ssh: [email protected]: Permission denied (publickey).
I haven't found in any of the GCE/Ansible documents or tutorials how to configure SSH access to the newly created instance. All the documents kind of imply that it should just work with what you configured to actually create the instance, which is clearly not the case.

What I tried was giving the service account a role that has the permission compute.instances.osAdminLogin, and setting remote_user to the name of the service account, but this is not working, and there are still two pieces of information that I don't know:

- gcloud compute os-login ssh-keys add adds the key to my personal account, not to the service account.
- How do I automatically configure this in Ansible so that I can run both playbooks back to back without manual intervention?

Note that I can SSH into the instance using my personal account just fine, via the OS Login feature of GCE and my personal SSH key. But I want to run the second Ansible playbook using a service account with its own SSH key, not my personal account, so that I can share the whole process with other people or with a CI/CD service.
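A minimal sketch of what the second playbook's connection settings would need to look like when OS Login is used (the sa_1234567890 username and the key path are placeholders; for a service account, the OS Login username has the form sa_<numeric-unique-id>, not the account's email):

```yaml
# Second playbook: configure the instance created by the first one.
- hosts: gce_instances
  remote_user: sa_1234567890          # OS Login username of the service account (placeholder)
  vars:
    # Private half of a key registered on the service account's OS Login profile (placeholder path)
    ansible_ssh_private_key_file: ~/.ssh/sa_key
  tasks:
    - name: verify connectivity
      ping:
```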
The scripts in the document the OP is following are broken. I have just spent about a day chasing this, and the problem is that the instance ends up connected to the newly created network rather than the project default. It then doesn't matter about OS Login, keys, or firewall rules: you just can't connect, either via the cloud console or over SSH remotely. As a first step, remove this problem by letting the instance attach to the default network (note there is no network key under network_interfaces):
- name: create an instance
  gcp_compute_instance:
    state: present
    name: "{{ system_name }}"
    machine_type: "{{ system_type }}"
    disks:
      - auto_delete: true
        boot: true
        source: "{{ disk }}"
    network_interfaces:
      - access_configs:
          - name: 'External NAT'
            nat_ip: "{{ address }}"
            type: 'ONE_TO_ONE_NAT'
    zone: "{{ zone }}"
    project: "{{ gcp_project }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"
    scopes:
      - "{{ gcp_scopes }}"
  register: instance
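To run both playbooks back to back without manual intervention, one option (a sketch, assuming address was registered from an earlier gcp_compute_address task, so its .address field holds the external IP) is to push the new address into the in-memory inventory and wait for SSH before the configuration play starts:

```yaml
- name: add the new instance to the in-memory inventory
  add_host:
    hostname: "{{ address.address }}"
    groupname: gce_instances

- name: wait for SSH on the new instance
  wait_for:
    host: "{{ address.address }}"
    port: 22
    delay: 10
    timeout: 300
```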
Of course, once the VM is working and allowing SSH access, you can go back and assess whether the extra network is required and how to use it.
According to the document you mention, you can use either a service account with the roles required to perform the operation, or the account associated with the machine (GCE), which is also a service account.

You can achieve this by creating a service account and generating a JSON key for it.
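If you'd rather avoid OS Login for the automation account altogether, another sketch: bake a dedicated public key into the instance metadata at creation time, so the configuration playbook can log in as that user. The ansible username and the key path are placeholders, and this only works when OS Login is not enforced on the project:

```yaml
- name: create an instance with a baked-in key for a dedicated user
  gcp_compute_instance:
    state: present
    name: "{{ system_name }}"
    machine_type: "{{ system_type }}"
    disks:
      - auto_delete: true
        boot: true
        source: "{{ disk }}"
    zone: "{{ zone }}"
    project: "{{ gcp_project }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"
    metadata:
      # Format is "<username>:<public key material>"; the user is
      # created on first boot when OS Login is not enforced.
      ssh-keys: "ansible:{{ lookup('file', '~/.ssh/ansible_ci.pub') }}"
```

The second playbook can then connect with remote_user: ansible and the matching private key, independent of anyone's personal OS Login profile.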