I've asked a different question about directory watching, which was answered, but the other half of the question is how best to create a never-ending process, in Ruby, to do this. Here are the requirements:
We've looked at BackgroundRb, but it seems a bit outdated and, to be honest, unreliable. We've also looked at DelayedJob, but it seems geared toward one-off jobs: since jobs run sequentially, a never-ending job would block every other job from getting done.
We are running a bunch of Ubuntu servers that form our environment.
Any ideas?
I have an EventMachine loop tailing some nginx log files and putting them into MongoDB. The "log eater" scripts are run with the Ruby daemons gem. http://daemons.rubyforge.org/
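Tailing aside, the core of a "log eater" is just turning each line into a document to insert into MongoDB. As a rough sketch (not my actual code), assuming nginx's default combined log format, the per-line step could look like this:

```ruby
# Hypothetical sketch: parse one line of nginx's default "combined" log
# format into a Hash that is ready to be inserted into MongoDB.
COMBINED = /\A(\S+) \S+ \S+ \[([^\]]+)\] "([A-Z]+) (\S+) [^"]*" (\d{3}) (\d+|-)/

def parse_nginx_line(line)
  m = COMBINED.match(line) or return nil   # skip lines that don't parse
  {
    :ip     => m[1],
    :time   => m[2],
    :method => m[3],
    :path   => m[4],
    :status => m[5].to_i,
    :bytes  => m[6] == "-" ? 0 : m[6].to_i  # nginx logs "-" for zero bytes
  }
end
```

Inside the EventMachine loop you would call this on each tailed line and hand any non-nil result to your Mongo collection.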
I have found it to be much more reliable than god. It also monitors and restarts your script if it dies. If you want to be notified when the runner dies, you can use monit for that.
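For reference, a minimal monit stanza for that kind of notification might look like the following (the pid file path and e-mail address are placeholders, not from my actual setup):

```
set alert admin@example.com

check process log_eater with pidfile /var/run/log_eater.pid
  if does not exist then alert
```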
Here is my runner script for daemons:
#!/usr/bin/env ruby
require 'rubygems'
require 'bundler'

# Set up the load path, then require every gem in the :default group.
Bundler.setup(:default)
Bundler.require(:default)

options = {
  :app_name  => "log_eater",   # base name for the pid file
  :dir_mode  => :system,       # keep pid files in /var/run
  :multiple  => true,          # allow more than one running instance
  :backtrace => true,          # log a backtrace on unhandled exceptions
  :monitor   => true           # restart the script if it dies
}

Daemons.run(File.join(File.dirname(__FILE__), 'log_eater.rb'), options)
This has been running for many months with no leaks and no problems. god had problems with leaks and dying. Capistrano can restart this by restarting your startup script.
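By way of illustration, the Capistrano hook can be as simple as the following (the task name and init-script path are assumptions, not from my deploy file):

```ruby
# Hypothetical Capistrano 2 recipe: restart the daemon after each deploy
# by calling the init script excerpted below.
namespace :log_eater do
  task :restart, :roles => :app do
    run "sudo /etc/init.d/log-eater restart"
  end
end

after "deploy:restart", "log_eater:restart"
```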
Here is an excerpt from my init script for Gentoo Linux:
start() {
    ebegin "Starting log-eater"
    cd /ruby/STABLE/quickanalytics
    scripts/log_eater_runner.rb start -- /usr/logs/nginx.log
    eend $? "Failed to start log-eater"
}
Anything after -- on the start command line is passed through as arguments to your script.