I have several bolts deployed to a topology on a cluster. Each is configured to log via slf4j. On the test machine I get both the stdout and the file appenders working fine.
When I deploy this to the cluster the logging seems to have disappeared. I don't get anything in the storm logs (on the supervisor machines), to /var/log/* or anywhere else as far as I can tell.
Should I be able to use a logging system inside a storm worker? If so, is there a trick to getting the messages?
Machines are all running CentOS 6.6 x64
Location of the logs
All the daemon logs are placed under the ${storm.log.dir} directory, which an administrator can set in the system properties or in the cluster configuration.
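As an illustration, the log directory can be set in the cluster configuration. This is only a sketch of a storm.yaml entry; the path shown is an example, not a default you should rely on:

```yaml
# Illustrative storm.yaml fragment -- adjust the path for your cluster.
storm.log.dir: "/var/log/storm"
```

The same property can also be passed as a system property (e.g. -Dstorm.log.dir=/var/log/storm) when the daemons are started.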
There are just three abstractions in Apache Storm: spouts, bolts, and topologies. A spout is a source of streams in a computation. Typically a spout reads from a queueing broker such as Kestrel, RabbitMQ, or Kafka, but it can also generate its own stream or read from somewhere like the Twitter streaming API.
A spout emits data to one or more bolts. A bolt represents a node in the topology with the smallest unit of processing logic, and the output of one bolt can be fed into another bolt as input. Storm keeps the topology running until you kill it.
This blog post suggests a method for finding the location of log files on a Storm cluster: http://www.saurabhsaxena.net/how-to-find-storm-worker-log-directory/
As mentioned in the blog post, when a topology is deployed on a cluster, its logs are written to worker*.log files.
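To make the worker*.log pattern concrete, here is a small sketch that builds a scratch directory mimicking a supervisor's log layout and then locates the worker logs with find. On a real supervisor you would point STORM_LOG_DIR at the actual log directory instead; the topology id and worker port below are made up for illustration:

```shell
# Stand-in for ${storm.log.dir}; replace with the real log dir on a supervisor.
STORM_LOG_DIR=$(mktemp -d)

# Fabricated topology id and worker port, purely for illustration.
mkdir -p "$STORM_LOG_DIR/workers-artifacts/MyTopology-1-1700000000/6700"
echo "bolt emitted tuple" > "$STORM_LOG_DIR/workers-artifacts/MyTopology-1-1700000000/6700/worker.log"

# Locate every worker log under the log directory.
find "$STORM_LOG_DIR" -name 'worker*.log'
```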
In my case (using the official Storm Docker image), the logs were in the supervisor container:
/logs/workers-artifacts/MyTopology-1-123123123/123/worker.log
You can set the storm.workers.artifacts.dir parameter in storm.yaml, and from then on worker artifacts (including logs) will be saved under that path, in a folder named after your topology.
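For reference, a sketch of what that storm.yaml entry might look like; the path is illustrative, not a recommendation:

```yaml
# Illustrative storm.yaml fragment; a relative path is resolved
# under the Storm log directory.
storm.workers.artifacts.dir: "workers-artifacts"
```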