I have created a script to start/stop my application, and now I want to add it as a CentOS system service. First I created a task to link my script to /etc/init.d/service_name, as below:
---
- name: create startup link
  file:
    src: "{{ cooltoo_service_script }}"
    dest: "{{ cooltoo_service_init }}"
    state: link
After creating the link, I want to register it as a system service. The command to do that is "chkconfig --add service_name". I wonder whether there is an Ansible module for this, instead of hardcoding the command in the playbook. I have looked at http://docs.ansible.com/ansible/service_module.html, but it only shows how to manage an existing service, not how to create a new one.
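There is no dedicated module for registering a SysV service, but once the init script exists under /etc/init.d, the service module's enabled argument covers what "chkconfig --add" plus "chkconfig on" would do (on RHEL-family SysV hosts, Ansible invokes chkconfig under the hood). A minimal sketch, where service_name is a placeholder for your own script's name:

```yaml
- name: register and enable the service at boot
  service:
    name: service_name   # placeholder: the name of your init script in /etc/init.d
    enabled: yes
```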
Ansible's systemd module controls systemd units such as services and timers. systemd is the daemon that manages service and timer units on a Linux system, and you mostly interact with it through unit (service) files, the systemctl command, and the journalctl command.
The snippet below creates a systemd service on CentOS 7.
/tasks/main.yml
- name: TeamCity | Create environment file
  template:
    src: teamcity.env.j2
    dest: /etc/sysconfig/teamcity

- name: TeamCity | Create Unit file
  template:
    src: teamcity.service.j2
    dest: /lib/systemd/system/teamcity.service
    mode: 0644
  notify:
    - reload systemctl

- name: TeamCity | Start teamcity
  service:
    name: teamcity.service
    state: started
    enabled: yes
/templates/teamcity.service.j2
[Unit]
Description=JetBrains TeamCity
Requires=network.target
After=syslog.target network.target

[Service]
Type=forking
EnvironmentFile=/etc/sysconfig/teamcity
ExecStart={{teamcity.installation_path}}/bin/teamcity-server.sh start
ExecStop={{teamcity.installation_path}}/bin/teamcity-server.sh stop
User=teamcity
PIDFile={{teamcity.installation_path}}/teamcity.pid
Environment="TEAMCITY_PID_FILE_PATH={{teamcity.installation_path}}/teamcity.pid"

[Install]
WantedBy=multi-user.target
/templates/teamcity.env.j2
TEAMCITY_DATA_PATH="{{ teamcity.data_path }}"
/handlers/main.yml
- name: reload systemctl
  command: systemctl daemon-reload
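On Ansible 2.2 and later, the handler can use the systemd module's daemon_reload option instead of shelling out to systemctl; a sketch of the equivalent handler:

```yaml
- name: reload systemctl
  systemd:
    daemon_reload: yes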
The 'service' module supports an 'enabled' argument.
Here's an example section of a playbook, which I will freely admit looks like a newbie attempt. It assumes RHEL/CentOS 6.x, which uses SysV init, not systemd.
- name: install rhel sysv supervisord init script
  copy:
    src: etc/rc.d/init.d/supervisord
    dest: /etc/rc.d/init.d/supervisord
    owner: root
    group: root
    mode: 0755

- name: install rhel sysv supervisord sysconfig
  copy:
    src: etc/sysconfig/supervisord
    dest: /etc/sysconfig/supervisord
    owner: root
    group: root
    mode: 0640

- name: enable sysv supervisord service
  service:
    name: supervisord
    enabled: yes

- name: start supervisord
  service:
    name: supervisord
    state: started
IMPORTANT: a lot of custom init scripts WILL FAIL with Ansible and SysV init. The reason is that the 'status' option (service supervisord status) needs to return an LSB-compliant exit code. Otherwise, Ansible cannot tell whether a service is up or down, and idempotency will fail (restart will still work, because that is unconditional).
Here's part of a script, which I've just rewritten to make use of the 'status' function from /etc/init.d/functions (you'll notice the same pattern in other Red Hat-provided init scripts in /etc/init.d/).
status)
        /usr/bin/supervisorctl $OPTIONS status
        status -p $PIDFILE supervisord
        # The 'status' option should return one of the LSB-defined return-codes,
        # in particular, return-code 3 should mean that the service is not
        # currently running. This is particularly important for Ansible's 'service'
        # module, as without this behaviour it won't know if a service is up or down.
        RETVAL=$?
        ;;
Reference: http://refspecs.linuxfoundation.org/LSB_5.0.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html
If the status action is requested, the init script will return the following exit status codes.
0         program is running or service is OK
1         program is dead and /var/run pid file exists
2         program is dead and /var/lock lock file exists
3         program is not running
4         program or service status is unknown
5-99      reserved for future LSB use
100-149   reserved for distribution use
150-199   reserved for application use
200-254   reserved
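To see why code 3 matters, here is a minimal sketch of how a caller such as Ansible's service module maps the 'status' exit code onto a service state. The lsb_state helper is hypothetical, written only to illustrate the mapping; it is not part of the init script above or of Ansible itself.

```shell
#!/bin/sh
# Hypothetical helper: translate the exit code of 'service X status'
# into a state name, following the LSB table above.
lsb_state() {
    case "$1" in
        0)   echo "running" ;;   # service is up
        1|2) echo "dead" ;;      # dead, but a pid/lock file remains
        3)   echo "stopped" ;;   # cleanly not running
        *)   echo "unknown" ;;   # non-LSB code: idempotency checks break
    esac
}

lsb_state 0   # -> running
lsb_state 3   # -> stopped
```

A script that always exits 0 from its status action would make every stopped service look "running" to this mapping, which is exactly the failure mode described above.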