Is there any way to configure supervisor to run some command every X seconds (like cron)?
I see an example with an eventlistener and the TICK_60 event:
[eventlistener:memmon]
command=memmon -a 200MB -m [email protected]
events=TICK_60
But it runs the command only once.
As you can see in the memmon example, supervisord does not execute memmon -a 200MB -m [email protected] on each event. Rather, it starts this event listener once (or potentially a few times, if you configure a pool) and then sends each new event over the standard input of that existing process.
Consequently, you really need to find or write a supervisor-compatible event listener for each additional kind of action you want to trigger on events.
write a supervisord.cfg event section
[eventlistener:passthru]
command=/tmp/simple.py /bin/date -u +"%%s %%S:%%H:%%d:%%m"
events=TICK_60
(note the escaping of % for ConfigParser)
write a simple.py Event Listener
Create this simple.py listener by making changes to the example listener from the docs so it executes its first argument with any remaining arguments:
#! /usr/bin/python
import sys
import subprocess

def write_stdout(s):
    sys.stdout.write(s)
    sys.stdout.flush()

def write_stderr(s):
    sys.stderr.write(s)
    sys.stderr.flush()

def main(args):
    while 1:
        write_stdout('READY\n')  # transition from ACKNOWLEDGED to READY
        line = sys.stdin.readline()  # read header line from stdin
        write_stderr(line)  # print it out to stderr
        headers = dict([x.split(':') for x in line.split()])
        data = sys.stdin.read(int(headers['len']))  # read the event payload
        subprocess.call(args, stdout=sys.stderr)  # run the wrapped command; don't mess with real stdout
        write_stderr(data)
        write_stdout('RESULT 2\nOK')  # transition from READY to ACKNOWLEDGED

if __name__ == '__main__':
    main(sys.argv[1:])
run it and check the listener
$ supervisorctl [-c cfg]
supervisor> status
passthru RUNNING pid 4471, uptime 0:00:32
supervisor> tail passthru
OKREADY
RESULT 2
OKREADY
...
supervisor> tail passthru stderr
supervisor> tail passthru stderr
ver:3.0 server:supervisor serial:0 pool:passthru poolserial:0 eventname:TICK_60 len:15
1451411161 01:17:29:12 <--- output
when:1451411160ver:3.0 server:supervisor serial:1 pool:passthru poolserial:1 eventname:TICK_60 len:15
1451411220 00:17:29:12 <--- output
when:1451411220
Now date -u +"%s %S:%H:%d:%m" is running every 60 seconds.
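If you need a different interval, note that supervisord only emits TICK_5, TICK_60 and TICK_3600 events (every 5 seconds, every minute, every hour). An hourly run of the same command would look roughly like this sketch, reusing the simple.py wrapper from above:

[eventlistener:passthru_hourly]
command=/tmp/simple.py /bin/date -u +"%%s %%S:%%H:%%d:%%m"
events=TICK_3600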
create an executable script
/tmp/hiworld.php:
#! /usr/bin/php
<?= "hiya\n";
(chmod +x ...)
change the listener's arguments in supervisord.cfg
[eventlistener:passthru]
command=/tmp/simple.py /tmp/hiworld.php
;stdout_logfile=/tmp/passthru
events=TICK_60
;autorestart=true
;startsecs=0
reload supervisord and test (reread seems not to detect this change)
supervisor> reload
Really restart the remote supervisord process y/N? y
Restarted supervisord
supervisor> status
passthru RUNNING pid 6017, uptime 0:00:10
supervisor> tail passthru stderr
supervisor> status
passthru RUNNING pid 6017, uptime 0:00:21
supervisor> status
passthru RUNNING pid 6017, uptime 0:01:01
supervisor> tail passthru stderr
ver:3.0 server:supervisor serial:316 pool:passthru poolserial:0 eventname:TICK_60 len:15
hiya
when:1418926740
supervisor>
Now the desired command is running every 60 seconds. You are now ready to adjust the particulars of permissions, locations, logs, etc.
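For instance, a tidied-up listener section might look roughly like this; the install paths, user account and log locations are assumptions for illustration, not part of the setup above:

[eventlistener:passthru]
; assumed install locations instead of /tmp
command=/usr/local/bin/simple.py /usr/local/bin/hiworld.php
; assumed unprivileged account to run the listener as
user=appuser
; capture the listener protocol chatter and the wrapped command's output
stdout_logfile=/var/log/supervisor/passthru.log
stderr_logfile=/var/log/supervisor/passthru_err.log
events=TICK_60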
Why reinvent the wheel? You can use cron and supervisord together.
In supervisord, create a task with autostart=false.
In cron, use * * * * * supervisorctl start <taskname> to start the task every minute.
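A minimal sketch of that combination, with a hypothetical task name and command:

; supervisord.cfg: define the task but do not start it automatically
[program:mytask]
command=/usr/local/bin/mytask.sh
autostart=false
autorestart=false
startsecs=0

# crontab entry: ask supervisord to start it once a minute
* * * * * supervisorctl start mytask

startsecs=0 and autorestart=false keep supervisord from flagging the short-lived job as a failed start or restarting it on its own between cron runs.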
Supervisor does not support this easily.
But to achieve your goal you can just use supervisor to start cron (for a docker container for instance):
https://gist.github.com/martinrusev/7015e393d46647dbad15
Install cron in your Docker image (apt-get install cron on a Debian-like image).
In supervisor config:
[program:cron]
command=cron -f -L 15
autostart=true
autorestart=true
-f runs cron in the foreground; -L 15 enables all cron log output.
Then use a user crontab, the global /etc/crontab, or any of the special crontab directories (/etc/cron.hourly, /etc/cron.daily, …).
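For example, a line in the container's /etc/crontab might look like this; the script path and schedule are placeholders:

# m h dom mon dow user  command
*/5 * * * * root /usr/local/bin/myjob.sh >> /var/log/myjob.log 2>&1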