I currently have a slow-starting Java service under systemd which takes about 60 seconds until it opens its HTTP port and can serve other clients.
Another service, also started by systemd, is a client of this first service and uses it much like a database; if the first service is not available, it dies after a certain number of retries.
Can I configure systemd to wait until the first service has made its socket available, i.e. only start the second (client) service once the socket is actually listening?
systemd waits for a daemon to initialize itself if the daemon forks. In your situation, that's pretty much the only way you have to do this.
The daemon offering the HTTP service must do all of its initialization in the main process; once that initialization is done and the socket is listening for connections, it calls fork() and the main process exits. At that point systemd knows whether your process initialized successfully (exit 0) or not (exit 1).
Such a service declares Type=forking, as follows:
[Service]
Type=forking
...
The other services have to wait for the first one, so they must require it to be started. Say your first service is called A; the client services would have a Requires= like this:
[Unit]
...
Requires=A.service
...
Of course, there is always another way, which is for the other services to know how to be patient: try to connect to the HTTP port; if that fails, sleep for a bit (in your case, 1 or 2 seconds would be just fine) and try again, until it works.
I have developed both methods and they both work very well.
Note: a powerful aspect of this method is that if service A gets restarted, you get a new socket. The client can then auto-reconnect to the new socket when it detects that the old one went down, which means you don't have to restart the other services when restarting service A. I like this method, but it's a bit more work to make sure it's all properly implemented.
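If you control the client's code, that patient loop can be quite small. Here is a rough sketch in Java; the host (localhost), port (8080) and 2-second retry interval are placeholders for your setup:
// Keep trying to reach the slow service's HTTP port before doing any real work.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PatientClient {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            try (Socket probe = new Socket()) {
                probe.connect(new InetSocketAddress("localhost", 8080), 2000);
                break;                      // the port is open, the service is up
            } catch (IOException notReadyYet) {
                Thread.sleep(2000);         // not listening yet, try again
            }
        }
        // ... normal client work against the service goes here ...
    }
}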
Another way, maybe, would be to use restart on failure. If the client attempts to connect to that HTTP service and cannot, it should fail (exit with a non-zero status), right? systemd can then automatically restart your process over and over again until it succeeds. It's sucky, but if you have no control over the code of those daemons, it's probably the easiest way.
[Service]
...
Restart=on-failure
RestartSec=10
#SuccessExitStatus=3 7 # if success is not always just 0
...
This example waits 10 seconds after a failure before attempting to restart.
You could also attempt a hack, although I never recommend such things because something could change later and break it: in the client services, change the unit files so that they sleep for 60 seconds and then start the main process. For that, just write a wrapper script like so:
#!/bin/sh
# Wait a fixed 60 seconds, then replace this shell with the real service.
sleep 60
exec "$@"
Then in the .service files, call that script as in:
ExecStart=/path/to/script /path/to/service args to service
This runs the script instead of your code directly. The script first sleeps for 60 seconds and then starts your service. So if for some reason the HTTP service takes 90 seconds this time... it will still fail.
Still, this can be useful to know, since such a script could do all sorts of things, such as use the nc tool to probe the port before actually starting the service process. You could even write your own probing tool:
#!/bin/sh
# Replace probe with your own check, e.g.: nc -z localhost 8080
# (host and port are placeholders for the slow service).
while true
do
    sleep 1
    if probe
    then
        break
    fi
done
# The port answers; replace this shell with the real service.
exec "$@"
However, notice that such a loop blocks until probe returns with exit code 0.
You have several options here.
The most elegant solution is to let systemd manage the socket for you. If you control the source code of the Java service, change it to use System.inheritedChannel() instead of allocating its own socket, and then use systemd units like this:
# example.socket
[Socket]
ListenStream=%t/example
[Install]
WantedBy=sockets.target
# example.service
[Service]
ExecStart=/usr/bin/java ...
StandardInput=socket
StandardOutput=socket
StandardError=journal
systemd will create the socket immediately (%t is the runtime directory, so in a system unit the socket will be /run/example) and start the service as soon as the first connection attempt is made. (If you want the service to be started unconditionally, add an [Install] section to it as well, with WantedBy=multi-user.target.) When your client program connects to the socket, it will be queued by the kernel and block until the server is ready to accept connections on the socket. One additional benefit is that you can restart the service without any downtime on the socket – connection attempts will be queued until the restarted service is ready to accept connections again.
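On the Java side, a rough sketch of what picking up the inherited socket could look like (this assumes the unit above with StandardInput=socket and a JDK with Unix-domain socket support in NIO, i.e. Java 16 or later; the class name is made up):
// systemd hands the listening socket to the JVM on stdin;
// System.inheritedChannel() exposes it as a ServerSocketChannel.
import java.nio.channels.Channel;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class ActivatedServer {
    public static void main(String[] args) throws Exception {
        Channel inherited = System.inheritedChannel();
        if (!(inherited instanceof ServerSocketChannel)) {
            System.err.println("no inherited listening socket; start this via example.socket");
            System.exit(1);
        }
        ServerSocketChannel listener = (ServerSocketChannel) inherited;
        while (true) {
            try (SocketChannel connection = listener.accept()) {
                // handle one connection ...
            }
        }
    }
}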
Alternatively, you can set up the service so that it signals to systemd when it is ready, and order the client after it. (Note that this requires After=example.service, not just Requires=example.service! Dependencies and ordering are orthogonal – without After=, both will be started in parallel.) There are two main service types that might make this possible:
Type=forking: systemd will consider the service to be ready as soon as the main program exits. Since you can't fork in Java, I think you would have to write a small shell script which starts the server in the background and then waits until the socket is available (while ! test -S /run/example; do sleep 1s; done). Once the script exits, the service is considered ready.
Type=notify: systemd will wait for a message from the service before it is considered ready. Ideally, the message should be sent from the service PID itself: check if you can call the sd_notify function from libsystemd via JNI/JNA/whatever (specifically, sd_notify(0, "READY=1")). If that's not possible, you can use the systemd-notify command-line tool (with the --ready option), but then you need to set NotifyAccess=all in the service unit (by default, only the main process may send notifications), and even then it likely will not work (systemd needs to process the message before systemd-notify exits, otherwise it will not be able to verify which cgroup the message came from).
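For the JNI/JNA route, a rough sketch of what a JNA binding for sd_notify could look like (this assumes JNA is on the classpath and libsystemd is installed; the interface and class names are made up):
// Calling libsystemd's sd_notify(3) from Java via JNA, for a unit with Type=notify.
import com.sun.jna.Library;
import com.sun.jna.Native;

public class ReadinessNotifier {
    // JNA mapping of the sd_notify() function from libsystemd.
    public interface Systemd extends Library {
        Systemd INSTANCE = Native.load("systemd", Systemd.class);
        int sd_notify(int unsetEnvironment, String state);
    }

    public static void notifyReady() {
        // Returns > 0 if the message was sent, 0 if $NOTIFY_SOCKET is not set,
        // and a negative value on error.
        Systemd.INSTANCE.sd_notify(0, "READY=1");
    }
}
Call notifyReady() once the HTTP socket is bound and accepting connections.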