Is there a parameter I can pass to delayed_job that will prevent it from deleting completed jobs from the delayed_jobs table?
Something like destroy_failed_jobs, but for completed jobs. Any ideas?
Expanding on @jefflunt's answer.
I added a migration to create a table to hold the completed jobs:

class CreateCompletedJobs < ActiveRecord::Migration
  def change
    create_table :completed_jobs do |t|
      t.integer  "priority", :default => 0
      t.integer  "attempts", :default => 0
      t.text     "handler",  :limit => 2147483647
      t.datetime "run_at"
      t.datetime "completed_at"
      t.string   "queue"

      t.timestamps
    end
  end
end
Then a model:
class CompletedJob < ActiveRecord::Base
end
Finally, I added the success hook to each job I want to store:

def success(job)
  save_completed_job(job)
end

private

def save_completed_job(job)
  CompletedJob.create(
    priority:     job.priority,
    attempts:     job.attempts,
    handler:      job.handler,
    run_at:       job.run_at,
    completed_at: DateTime.now,
    queue:        job.queue
  )
end
Since I have more than one job, I placed the success hook in a module and included it in all the jobs I want to store. (Note: some aren't worth storing.)
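The shared module might look something like this. This is a sketch: `RecordsCompletion` and `ReportJob` are made-up names, and `CompletedJob` is the model defined above.

```ruby
require 'date'

# Sketch of a shared success hook, assuming the CompletedJob
# model from the migration above. Module and job class names
# are hypothetical.
module RecordsCompletion
  # Delayed Job invokes #success(job) on the payload object
  # after the job runs without raising.
  def success(job)
    CompletedJob.create(
      priority:     job.priority,
      attempts:     job.attempts,
      handler:      job.handler,
      run_at:       job.run_at,
      completed_at: DateTime.now,
      queue:        job.queue
    )
  end
end

class ReportJob
  include RecordsCompletion

  def perform
    # ... the actual work ...
  end
end
```

Any job that should be archived just includes the module; jobs not worth storing simply don't.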
It doesn't appear so. From the README: https://github.com/tobi/delayed_job
By default, it will delete failed jobs (and it always deletes successful jobs). If you want to keep failed jobs, set Delayed::Job.destroy_failed_jobs = false. The failed jobs will be marked with non-null failed_at.
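For reference, that setting usually lives in an initializer. Note that later delayed_job releases moved the flag from Delayed::Job to Delayed::Worker:

```ruby
# config/initializers/delayed_job.rb
# Keep failed jobs in the table (they get a non-null failed_at).
Delayed::Job.destroy_failed_jobs = false

# Newer versions of delayed_job expose the same switch on the
# worker instead:
# Delayed::Worker.destroy_failed_jobs = false
```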
You would probably need to hook into its destroy method so that it copies the job to a separate table holding the list of completed jobs, or simply log completed jobs to a file, if a log is all you need.
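A hypothetical sketch of that destroy hook, assuming the CompletedJob model from the other answer. The exact hook point varies between delayed_job versions, so treat this as an illustration rather than the library's API:

```ruby
require 'date'

# Copy a job into completed_jobs just before Delayed Job
# deletes it from the delayed_jobs table. CompletedJob is the
# archive model described in the other answer; the module name
# is made up.
module ArchiveBeforeDestroy
  def destroy
    # Failed jobs carry a non-null failed_at; archive only the
    # jobs that completed successfully.
    if failed_at.nil?
      CompletedJob.create(
        priority:     priority,
        attempts:     attempts,
        handler:      handler,
        run_at:       run_at,
        completed_at: DateTime.now,
        queue:        queue
      )
    end
    super
  end
end

# In an initializer, after delayed_job has loaded:
#   Delayed::Job.prepend(ArchiveBeforeDestroy)
```

Using Module#prepend keeps the original destroy intact via super, so the job is still removed from the queue table after being archived.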
What you don't want is to leave the jobs in the delayed_jobs table, for a couple of reasons. First, Delayed Job uses that table as its TODO list; it should contain only work that still needs to be done. Second, if you hacked it to keep all jobs in the same table, the delayed_jobs table would only grow, which would slow down processing over time, since the query to find jobs that have not yet been completed would have to filter out those that have.