I've been having some trouble with restarting the SSH daemon with Ansible.
I'm using the latest software as of May 11 2015 (Ansible 1.9.1 / Vagrant 1.7.2 / VirtualBox 4.3.26 / Host: OS X 10.10.1 / Guest: ubuntu/trusty64)
tl;dr: There appears to be something wrong with the way I'm invoking the service module.
Playbook
    - hosts: all
      remote_user: vagrant
      tasks:
        ...
        - name: Forbid SSH root login
          sudo: yes
          lineinfile: dest=/etc/ssh/sshd_config regexp="^PermitRootLogin" line="PermitRootLogin no" state=present
          notify:
            - restart ssh
        ...
      handlers:
        - name: restart ssh
          sudo: yes
          service: name=ssh state=restarted
Output
    NOTIFIED: [restart ssh]
    failed: [default] => {"failed": true}

    FATAL: all hosts have already failed -- aborting
The nginx handler completed successfully with nearly identical syntax.
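For comparison, the working nginx handler is essentially the following (a sketch; only the service name differs):

    - name: restart nginx
      sudo: yes
      service: name=nginx state=restarted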
Playbook
    - name: Restart SSH server
      sudo: yes
      service: name=ssh state=restarted
Same output as the handler use case.
Shell
> ansible all -i ansible_inventory -u vagrant -k -m service -a "name=ssh state=restarted"
Inventory
127.0.0.1:8022
Output
    127.0.0.1 | FAILED >> {
        "failed": true,
        "msg": ""
    }
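With msg empty, there isn't much to go on; one avenue for more detail is to re-run the ad-hoc command with increased verbosity (-vvv):

    > ansible all -i ansible_inventory -u vagrant -k -m service -a "name=ssh state=restarted" -vvv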
When I SSH in and run the usual command, everything works fine.
    > vagrant ssh
    > sudo service ssh restart
    ssh stop/waiting
    ssh start/running, process 7899
    > echo $?
    0
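The restart also works when driven through the command module; the task, reconstructed from the output below, was essentially:

    - name: Restart SSH server
      sudo: yes          # assumed: restarting ssh requires root
      command: service ssh restart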
Output
    TASK: [Restart SSH server] ****************************************************
    changed: [default] => {"changed": true, "cmd": ["service", "ssh", "restart"], "delta": "0:00:00.060220", "end": "2015-05-11 07:59:25.310183", "rc": 0, "start": "2015-05-11 07:59:25.249963", "stderr": "", "stdout": "ssh stop/waiting\nssh start/running, process 8553", "warnings": ["Consider using service module rather than running service"]}
As we can see in the warning, we're supposed to use the service module, but I'm still not sure where the snag is.
As the comments above state, this is an Ansible issue that will apparently be fixed in the 2.0 release.
I just changed my handler to use the command module and moved on:

    - name: restart sshd
      command: service ssh restart
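For anyone on Ansible 2.x, where this is reportedly fixed, the service-based handler should work again; note that sudo: yes has since been deprecated in favor of become: yes, so a 2.x handler would look roughly like this (untested sketch):

    - name: restart ssh
      become: yes        # replaces the deprecated sudo: yes in Ansible 2.x
      service: name=ssh state=restarted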