I run multiple CoreOS instances on Google Compute Engine (GCE). CoreOS uses systemd's journal logging feature. How can I push all logs to a remote destination? As I understand, systemd journal doesn't come with remote logging abilities. My current work-around looks like this:
journalctl -o short -f | ncat <addr> <port>
With https://logentries.com using their Token-based input via TCP:
journalctl -o short -f | awk '{ print "<token>", $0; fflush(); }' | ncat data.logentries.com 10000
Are there better ways?
EDIT: https://medium.com/coreos-linux-for-massive-server-deployments/defb984185c5
A downside to using -o short is that the format is hard to parse; short-iso is better. If you're using an ELK stack, exporting as JSON is better still. A systemd service like the following will ship JSON-formatted logs to a remote host quite well:
[Unit]
Description=Send Journalctl to Syslog
[Service]
TimeoutStartSec=0
ExecStart=/bin/sh -c '/usr/bin/journalctl -o json -f | /usr/bin/ncat syslog 1515'
Restart=always
RestartSec=5s
[Install]
WantedBy=multi-user.target
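To put the unit in place by hand, something like the following works (the unit name journal-export.service and the install path are my choice; on CoreOS you would more typically embed the unit in cloud-config):

```shell
# Write the unit shown above to a local file first.
# The ncat host/port must match wherever your collector listens
# (here: a host named "syslog", port 1515, matching the logstash input).
cat > journal-export.service <<'EOF'
[Unit]
Description=Send Journalctl to Syslog

[Service]
TimeoutStartSec=0
ExecStart=/bin/sh -c '/usr/bin/journalctl -o json -f | /usr/bin/ncat syslog 1515'
Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target
EOF

# Then install and enable it (requires root, so commented out here):
# sudo cp journal-export.service /etc/systemd/system/
# sudo systemctl daemon-reload
# sudo systemctl enable --now journal-export.service
```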
On the far side, my logstash.conf includes:
input {
  tcp {
    port => 1515
    codec => json_lines
    type => "systemd"
  }
}

filter {
  if [type] == "systemd" {
    mutate { rename => [ "MESSAGE", "message" ] }
    mutate { rename => [ "_SYSTEMD_UNIT", "program" ] }
  }
}
This results in the whole journalctl data structure being available to Kibana/Elasticsearch.
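For reference, a single journalctl -o json line looks roughly like this (fields abbreviated; real entries carry many more _-prefixed metadata fields), and the two mutate/rename filters above amount to the following mapping, shown here with sed purely for illustration:

```shell
# One abbreviated journal entry as emitted by `journalctl -o json`:
line='{"MESSAGE":"Started Docker.","_SYSTEMD_UNIT":"docker.service","PRIORITY":"6"}'

# The logstash filter renames MESSAGE -> message and _SYSTEMD_UNIT -> program:
mapped=$(printf '%s\n' "$line" | sed -e 's/"MESSAGE"/"message"/' -e 's/"_SYSTEMD_UNIT"/"program"/')
printf '%s\n' "$mapped"
# -> {"message":"Started Docker.","program":"docker.service","PRIORITY":"6"}
```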
systemd version 216 and later includes remote logging capabilities, via a client/server process pair (systemd-journal-upload on the sender, systemd-journal-remote on the receiver):
http://www.freedesktop.org/software/systemd/man/systemd-journal-remote.html
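A minimal sketch of that pair, with the collector hostname as a placeholder: the receiving host enables systemd-journal-remote.service (which listens on port 19532 by default), and each sending host runs systemd-journal-upload pointed at it via /etc/systemd/journal-upload.conf:

```ini
# /etc/systemd/journal-upload.conf on each sender
# (logs.example.com is a placeholder for your collector)
[Upload]
URL=http://logs.example.com:19532
```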