I need some advice on how to restart all Airflow services on deploy without killing the workers in the middle of a task.
I've written a deployment procedure for my DAGs which installs Airflow and any other pip dependencies in a virtualenv. Once my release directory is ready, I restart all of the Airflow services so they run out of the new virtualenv.
The problem with this deploy procedure is that the workers get killed immediately. I'd like to add some sort of monitoring to the script that pauses all DAGs, waits for the workers to go idle, and then restarts the services, but the Airflow CLI has no way to tell me which DAGs are enabled or whether the workers are idle.
I understand that many of the Airflow services can auto-detect changes in the DAGs folder, but I want each deployment to have its own virtualenv. If I don't restart all of the services, a new deployment won't pick up a new line in my requirements.txt file.
You have access to the Airflow metadata DB, so consider writing a deployment script that does this process for you: query which DAGs are unpaused and whether any task instances are still running, pause everything, and wait for the workers to drain before restarting anything.
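Here is a minimal sketch of that idea. This is my own illustration, not an official Airflow utility; it assumes Airflow 1.x-style model imports and direct SQLAlchemy access to the metadata DB, and the script name deploy_drain.py is made up:

# deploy_drain.py -- hypothetical helper run by the deploy script.
# Pauses every DAG, waits until no task instances are running, then
# returns so the deploy script can restart services and unpause.
import time

from airflow import settings
from airflow.models import DagModel, TaskInstance
from airflow.utils.state import State


def pause_all_dags(session):
    """Pause every currently-unpaused DAG; return their ids for later unpausing."""
    unpaused = session.query(DagModel).filter(DagModel.is_paused.is_(False)).all()
    for dag in unpaused:
        dag.is_paused = True
    session.commit()
    return [dag.dag_id for dag in unpaused]


def wait_for_idle_workers(session, poll_seconds=30):
    """Poll the task_instance table until nothing is in the RUNNING state."""
    while session.query(TaskInstance).filter(
        TaskInstance.state == State.RUNNING
    ).count() > 0:
        time.sleep(poll_seconds)


if __name__ == "__main__":
    session = settings.Session()
    dag_ids = pause_all_dags(session)
    wait_for_idle_workers(session)
    # ...restart the Airflow services here, then set is_paused back to
    # False for the dag_ids recorded above...

Pausing first matters: otherwise the scheduler keeps queuing new task instances while you wait for the running ones to finish.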
Airflow workers shut down gracefully on SIGINT: they finish the tasks they are currently running and then exit, instead of being killed mid-task. Configure your process supervisor to stop them with SIGINT instead of the default SIGTERM. If you're using systemd, the service unit will look something like this:
...
[Service]
EnvironmentFile=/etc/sysconfig/airflow
User=airflow
Group=airflow
Type=simple
ExecStart=...
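# Send SIGINT instead of systemd's default SIGTERM so the worker
# performs a warm shutdown (finishes in-flight tasks before exiting).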
KillSignal=SIGINT
Restart=on-failure
RestartSec=10s
...
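Two caveats, both standard systemd behavior rather than anything Airflow-specific: systemctl restart waits for the stop to complete before starting the new process, so your deploy script can simply call it after the drain step; and systemd escalates to SIGKILL if the unit hasn't stopped within TimeoutStopSec (90 seconds by default), so raise that setting in the [Service] section if your tasks can legitimately run longer than that.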