 

How can I "discard" a new task from being queued if a command is already processing?

I have Hangfire set up on a Windows Service, which I start basically with _server = new BackgroundJobServer();

I have something like 30 commands configured, each with different Command/Parameters (which is a basic setup, I believe).

Each task must be executed continuously, every 1 minute. So I've set up the Cron and Frequency columns as */1 * * * * and Every 1 minute respectively.

The problem is this: if I queue the 30 commands and some of them take more than 1 minute (e.g. 10 minutes due to a heavy process being executed), Hangfire keeps queuing every task (including the ones which are already processing), resulting in a blocked/infinitely growing queue for the tasks which can't de-queue within 1 minute.

Example of method (i.e. Command) I have:

public void CMS1Integration(string systemCode)
public void CMS2Integration(string systemCode)
public void CMS3Integration(string systemCode)

This way, method CMS1Integration can run in parallel, but only a single instance for each systemCode I pass. So CMS1Integration("cms1Name1") + CMS1Integration("cms1Name2") can run in parallel, but not CMS1Integration("cms1Name1") + CMS1Integration("cms1Name1") (because they have the same systemCode).

How can I resolve this and avoid re-queuing an already-processing task?

Note: they are queued as recurring-jobs:

...
RecurringJob.AddOrUpdate(hangFireCmd.Name, GetAction(hangFireCmd), hangFireCmd.Cron, TimeZoneInfo.Local);
...

Looking in the Set table, I find this list (as an example):

Id  Key Score   Value   ExpireAt
24279   recurring-jobs  1717657740  cms1Name1   NULL
24280   recurring-jobs  1717657740  cms1Name2   NULL
24281   recurring-jobs  1717657740  cms1Name3   NULL
24282   recurring-jobs  1717657740  cms2Name1   NULL
24283   recurring-jobs  1717657740  cms2Name2   NULL
24284   recurring-jobs  1717711800  cms3Name1   NULL
24285   recurring-jobs  1717657740  cms3Name2   NULL
24286   recurring-jobs  1717657740  cms3Name3   NULL
24287   recurring-jobs  1717657740  cms3Name4   NULL
markzzz asked Oct 13 '25 01:10

2 Answers

There are a couple of ways to solve this, depending on your setup, but probably the best way is to solve your throughput problem. Short of making the individual jobs more efficient or more stable in execution time, you could:

  1. Increase job throughput. You could add more parallelization to ensure jobs don't sit in the queue for greater than your requirements. To achieve this you can either increase the WorkerCount for your configuration of the Hangfire Server which enables more jobs to run in parallel on a single instance or you can add more instances of your server application. Of course your code and deployment scenario would need to support this.

  2. Decrease total job queue by not executing everything every minute.

  3. It is not my preferred method, but you could use the Hangfire monitoring API at the beginning of your job methods to delete additional enqueued instances of the same job. There are a couple of prerequisites to this. First, every job needs a unique identifier that you control. This means if you call the same method with different parameters, then each of those method calls needs a unique Hangfire name. Second, you'll have to inject an instance of IBackgroundJobClient into your jobs so each job can manage the queue. Third, the method itself has to somehow be aware of the way in which it was called. You may want to include PerformContext as a method argument so the job knows a bit about itself. Finally, you can get a reference to the Hangfire monitoring API and delete duplicate jobs with something like this:

// performContext is the PerformContext argument mentioned above; it tells
// the running job its own id, so it doesn't delete itself.
IMonitoringApi api = JobStorage.Current.GetMonitoringApi();
var enqueuedJobs = api.EnqueuedJobs("yourQueueName", 0, 50);

// How you filter in the Where clause is up to you. You'll have access to the
// invocation data for each job, so you could match on the method and the
// parameters of the currently running job.
foreach (var job in enqueuedJobs.Where(j => j.Value.Job.ToString() == jobName
                                         && j.Key != performContext.BackgroundJob.Id))
{
    _jobClient.Delete(job.Key);
}
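For option 1 above, parallelism on a single instance is controlled through BackgroundJobServerOptions when constructing the server. A minimal sketch, assuming the service startup from the question; the multiplier is purely illustrative, not a recommendation:

```csharp
using System;
using Hangfire;

// Raising WorkerCount lets more jobs run concurrently on this instance.
// The Hangfire default is Environment.ProcessorCount * 5; tune the value
// to your workload and your storage/database connection limits.
var options = new BackgroundJobServerOptions
{
    WorkerCount = Environment.ProcessorCount * 10 // illustrative value
};

_server = new BackgroundJobServer(options);
```

Note that every worker holds work in the same process, so more workers only help if the jobs are I/O-bound or the machine has CPU headroom; otherwise, adding more server instances is the alternative mentioned above.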
Christopher Rhoads answered Oct 14 '25 16:10

You should always make your background job reentrant. Suppose you have a list of tasks to execute. It's your responsibility to track the execution status of each task, and every new scheduled run should take care of only the tasks that have not been executed yet.

A small example: let's say you have a job that sends a welcome email to each newly registered user. You will have to track the status of this email in some persistent storage. For simplicity, say we have a column in our User table that holds the SendWelcomEmailStatus:

public enum SendWelcomEmailStatus
{
    ToDo = 0,
    Executing = 1,
    Done = 2,
    Error = 3
}

The first thing to do in your job is to read all Users with status == SendWelcomEmailStatus.ToDo (it's intentionally equal to 0 so that it is the default value when creating a new user), and then update the value to SendWelcomEmailStatus.Executing. When the job ends you should update the value to Done or Error. You can have a separate job that handles the Error values for retries.
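The claim-then-update flow described above could be sketched like this. Everything named here apart from SendWelcomEmailStatus is a hypothetical placeholder (_userRepository, _emailService, and their methods stand in for your own persistence and email code):

```csharp
public void SendWelcomeEmails()
{
    // Hypothetical repository call: atomically flips ToDo -> Executing and
    // returns only the rows this invocation claimed, so an overlapping run
    // of the same job cannot pick them up a second time.
    var claimed = _userRepository.ClaimUsersWithStatus(
        from: SendWelcomEmailStatus.ToDo,
        to: SendWelcomEmailStatus.Executing);

    foreach (var user in claimed)
    {
        try
        {
            _emailService.SendWelcomeEmail(user); // hypothetical
            _userRepository.SetStatus(user.Id, SendWelcomEmailStatus.Done);
        }
        catch (Exception)
        {
            // Users left in Error can be retried by a separate job.
            _userRepository.SetStatus(user.Id, SendWelcomEmailStatus.Error);
        }
    }
}
```

The key design point is that the claim step must be atomic (e.g. an UPDATE ... WHERE Status = 0 returning the affected rows); if you read and then update in two steps, two overlapping executions can still claim the same user.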

This is your responsibility: you should always design your execution to be reentrant, as I said. Assume that multiple executions can occur at the same time.

Some Hangfire best practices: https://docs.hangfire.io/en/latest/best-practices.html

Sorry for my English; don't hesitate to ask if you need more details, I can elaborate more. If you explain exactly what your job does every minute, I can give you more enhancement ideas.

saad answered Oct 14 '25 15:10