I have a feeling this is the Rails equivalent of hypochondria... but I took a peek at tail -f logs/development.log and then became kind of hypnotized by the output:
Delayed::Backend::ActiveRecord::Job Load (0.8ms) UPDATE "delayed_jobs" SET locked_at = '2016-08-26 12:49:09.594888', locked_by = 'host:ghost pid:4564' WHERE id IN (SELECT "delayed_jobs"."id" FROM "delayed_jobs" WHERE ((run_at <= '2016-08-26 12:49:09.594275' AND (locked_at IS NULL OR locked_at < '2016-08-26 08:49:09.594332') OR locked_by = 'host:ghost pid:4564') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1 FOR UPDATE) RETURNING *
Delayed::Backend::ActiveRecord::Job Load (0.5ms) UPDATE "delayed_jobs" SET locked_at = '2016-08-26 12:49:14.651262', locked_by = 'host:ghost pid:4564' WHERE id IN (SELECT "delayed_jobs"."id" FROM "delayed_jobs" WHERE ((run_at <= '2016-08-26 12:49:14.650707' AND (locked_at IS NULL OR locked_at < '2016-08-26 08:49:14.650765') OR locked_by = 'host:ghost pid:4564') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1 FOR UPDATE) RETURNING *
Delayed::Backend::ActiveRecord::Job Load (0.5ms) UPDATE "delayed_jobs" SET locked_at = '2016-08-26 12:49:19.716179', locked_by = 'host:ghost pid:4564' WHERE id IN (SELECT "delayed_jobs"."id" FROM "delayed_jobs" WHERE ((run_at <= '2016-08-26 12:49:19.715433' AND (locked_at IS NULL OR locked_at < '2016-08-26 08:49:19.715494') OR locked_by = 'host:ghost pid:4564') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1 FOR UPDATE) RETURNING *
This runs... every five seconds. So... err, is that... normal?
It's occurred to me that this may simply be how Delayed Job works (checking for jobs against timestamps), and that this is just it doing its thing, but I've failed to find decent written documentation to that effect.
If so... my second concern is: won't this burn money on my Heroku instance? I'd installed the workless gem in an attempt to mitigate costs, but I'm not seeing anything in the logs to suggest it ever shuts the worker down...
Bug or feature, how do I not bankrupt myself?
I'll try my best to answer your questions more completely than the others have done.
Yes, this is normal behavior. When you start the Delayed Job process, it checks your database for unprocessed jobs at a configurable interval. (The default is every 5 seconds, and you can change it with the Delayed::Worker.sleep_delay setting.)
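For example, if you wanted it to poll every 10 seconds instead, a minimal initializer would do it (the file name config/initializers/delayed_job_config.rb is just a conventional choice, not required):

# config/initializers/delayed_job_config.rb
# Poll the delayed_jobs table every 10 seconds instead of the default 5.
Delayed::Worker.sleep_delay = 10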
You are right that, during one of these periodic checks, Delayed Job picks up and locks the next available job (that's the FOR UPDATE query you're seeing) and then runs it.
To run Delayed Job efficiently, you usually have a worker dyno on at all times to be constantly checking for new jobs to run.
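For reference, with the standard delayed_job setup on Heroku that usually means a Procfile entry along these lines (assuming the jobs:work rake task that ships with delayed_job):

# Procfile
worker: bundle exec rake jobs:work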
However, the workless gem that you link to in your question helps you work around that. Instead of needing to have the worker on 24/7 to check for new jobs, workless only starts up a worker dyno when there are jobs to run in the queue. When the jobs are done, workless shuts down the worker dyno.
You can read a description of this behavior in workless's README:
How does Workless work?
Delayed::Workless::Scaler is mixed into the Delayed::Job class, which adds a bunch of callbacks to it.
- When a job is created on the database, a create callback starts a worker.
- The worker runs the job, which removes it from the database.
- A destroy callback stops the worker.
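To make that callback idea concrete, here is a rough, simplified sketch of the pattern the README describes. This is not workless's actual code, and the HypotheticalHerokuScaler class is made up purely for illustration; workless itself handles the Heroku API calls for you:

# Illustrative sketch only - not the real workless implementation.
module JobScaler
  def self.included(base)
    base.class_eval do
      after_create  { JobScaler.scale_up }    # a new job exists, so start a worker dyno
      after_destroy { JobScaler.scale_down }  # job finished and was deleted, so maybe stop it
    end
  end

  def self.scale_up
    # Hypothetical call standing in for the Heroku API request workless makes.
    HypotheticalHerokuScaler.set_worker_count(1)
  end

  def self.scale_down
    # Only shut the worker down once the queue is empty.
    HypotheticalHerokuScaler.set_worker_count(0) if Delayed::Job.count.zero?
  end
end

Delayed::Job.include(JobScaler)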
There is always a tradeoff, however. It takes time for Heroku to start up a worker dyno, so processing of new jobs will not be as instantaneous. For example, with a constantly-running worker dyno, your job will usually run within 5 seconds. If you use workless instead, it'll likely take around 30 seconds for the dyno to be started and for Delayed Job to get to the job. Obviously, what is acceptable depends on your application, so that's completely your decision.