Let's say I have a trigger configured this way:
<bean id="updateInsBBTrigger" class="org.springframework.scheduling.quartz.CronTriggerBean"> <property name="jobDetail" ref="updateInsBBJobDetail"/> <!-- run every morning at 5 AM --> <property name="cronExpression" value="0 0 5 * * ?"/> </bean>
The job fired by this trigger has to connect to another application, and if there is any problem (like a connection failure) it should retry the task every 10 minutes, up to five times or until it succeeds. Is there any way to configure the trigger to work like this?
A misfire occurs if a persistent trigger “misses” its firing time because of the scheduler being shutdown, or because there are no available threads in Quartz's thread pool for executing the job. The different trigger types have different misfire instructions available to them.
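For illustration, here is a minimal sketch (assuming the Quartz 2.x builder API, which the last answer below also uses) of building a cron trigger equivalent to the one in the question with an explicit misfire instruction; the trigger name is taken from the question's config:

import org.quartz.CronScheduleBuilder;
import org.quartz.CronTrigger;
import org.quartz.TriggerBuilder;

public class MisfireExample {

    // Build a cron trigger for "every morning at 5 AM" that, after a misfire,
    // fires once as soon as possible and then continues with the normal schedule.
    public static CronTrigger buildTrigger() {
        return TriggerBuilder.newTrigger()
                .withIdentity("updateInsBBTrigger")
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0 5 * * ?")
                        .withMisfireHandlingInstructionFireAndProceed())
                .build();
    }
}

Note that a misfire is a missed firing time (scheduler down, no free worker threads), not a job that ran and threw an exception, which is why the answers here reschedule or refire the job explicitly.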
You have to reschedule the job by creating a new trigger; the job itself stays the same, but it gets a new fire time (see the sketch below).
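A minimal sketch of that rescheduling idea, assuming Quartz 2.x and the default group; the trigger and job names are taken from the question's config, and the helper method is illustrative:

import java.util.Date;

import org.quartz.DateBuilder;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.TriggerKey;

public class RetryRescheduler {

    // Replace the existing trigger with a one-shot trigger that fires in 10 minutes.
    public static void retryInTenMinutes(Scheduler scheduler) throws SchedulerException {
        Date tenMinutesFromNow = DateBuilder.futureDate(10, DateBuilder.IntervalUnit.MINUTE);

        Trigger retryTrigger = TriggerBuilder.newTrigger()
                .withIdentity("updateInsBBTrigger", "DEFAULT")
                .forJob("updateInsBBJobDetail", "DEFAULT")
                .startAt(tenMinutesFromNow)
                .build();

        // rescheduleJob removes the trigger with the given key and stores the new one
        scheduler.rescheduleJob(TriggerKey.triggerKey("updateInsBBTrigger", "DEFAULT"), retryTrigger);
    }
}

Keep in mind that replacing the cron trigger like this discards the original 5 AM schedule, so it has to be restored once the retries succeed; the last answer below avoids that by scheduling a separate retry job instead.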
Source: Automatically Retry Failed Jobs in Quartz
If you want to have a job which keeps trying over and over again until it succeeds, all you have to do is throw a JobExecutionException with a flag to tell the scheduler to fire it again when it fails. The following code shows how:
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

class MyJob implements Job {

    public MyJob() {
    }

    public void execute(JobExecutionContext context) throws JobExecutionException {
        try {
            // connect to other application etc
        } catch (Exception e) {
            // sleep for 10 mins
            try {
                Thread.sleep(600000);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
            JobExecutionException e2 = new JobExecutionException(e);
            // fire it again
            e2.setRefireImmediately(true);
            throw e2;
        }
    }
}
It gets a bit more complicated if you want to retry a certain number of times. You have to use a StatefulJob and hold a retryCounter in its JobDataMap, which you increment if the job fails. If the counter exceeds the maximum number of retries, then you can disable the job if you wish.
import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.StatefulJob;

class MyJob implements StatefulJob {

    public MyJob() {
    }

    public void execute(JobExecutionContext context) throws JobExecutionException {
        JobDataMap dataMap = context.getJobDetail().getJobDataMap();
        int count = dataMap.getIntValue("count");

        // allow 5 retries
        if (count >= 5) {
            JobExecutionException e = new JobExecutionException("Retries exceeded");
            // make sure it doesn't run again
            e.setUnscheduleAllTriggers(true);
            throw e;
        }

        try {
            // connect to other application etc

            // reset counter back to 0
            dataMap.putAsString("count", 0);
        } catch (Exception e) {
            count++;
            dataMap.putAsString("count", count);

            JobExecutionException e2 = new JobExecutionException(e);
            // sleep for 10 mins
            try {
                Thread.sleep(600000);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
            // fire it again
            e2.setRefireImmediately(true);
            throw e2;
        }
    }
}
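One thing this leaves out is that the "count" entry has to be present before the first execution, because getIntValue will typically fail on a missing key. A minimal scheduling sketch, assuming the Quartz 1.x API that StatefulJob belongs to (the job and trigger names are taken from the question; the setup class itself is illustrative):

import org.quartz.CronTrigger;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

public class RetryJobSetup {

    public static void main(String[] args) throws Exception {
        // MyJob is the StatefulJob defined above.
        // Seed the retry counter so the first call to getIntValue("count") succeeds.
        JobDetail jobDetail = new JobDetail("updateInsBBJobDetail", "DEFAULT", MyJob.class);
        jobDetail.getJobDataMap().put("count", 0);

        // Same schedule as in the question: every morning at 5 AM
        CronTrigger trigger = new CronTrigger("updateInsBBTrigger", "DEFAULT", "0 0 5 * * ?");

        Scheduler scheduler = new StdSchedulerFactory().getScheduler();
        scheduler.scheduleJob(jobDetail, trigger);
        scheduler.start();
    }
}

With the Spring config from the question, the same entry can be seeded via the JobDetail bean's jobDataAsMap property.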
I would recommend an implementation like this one to recover the job after a failure:
final JobDataMap jobDataMap = jobCtx.getJobDetail().getJobDataMap();

// the key doesn't exist until the first retry
final int retries = jobDataMap.containsKey(COUNT_MAP_KEY)
        ? jobDataMap.getIntValue(COUNT_MAP_KEY)
        : 0;

// to stop after a while
if (retries < MAX_RETRIES) {
    log.warn("Retry job " + jobCtx.getJobDetail());

    // increment the number of retries
    jobDataMap.put(COUNT_MAP_KEY, retries + 1);

    final JobDetail job = jobCtx
            .getJobDetail()
            .getJobBuilder()
            // to track the number of retries
            .withIdentity(jobCtx.getJobDetail().getKey().getName() + " - " + retries, "FailingJobsGroup")
            .usingJobData(jobDataMap)
            .build();

    final OperableTrigger trigger = (OperableTrigger) TriggerBuilder
            .newTrigger()
            .forJob(job)
            // trying to reduce back pressure, you can use another algorithm
            .startAt(new Date(jobCtx.getFireTime().getTime() + (retries * 100)))
            .build();

    try {
        // schedule another job to avoid blocking threads
        jobCtx.getScheduler().scheduleJob(job, trigger);
    } catch (SchedulerException e) {
        log.error("Error creating job");
        throw new JobExecutionException(e);
    }
}
Why? Because scheduling a separate, delayed job instead of sleeping inside execute() keeps Quartz worker threads free (the Thread.sleep in the earlier examples blocks one for the whole 10 minutes), and the computed start time gives you control over the back-off between retries.
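For context, a sketch of where that recovery snippet might live, assuming Quartz 2.x; the class name FailingJob and the helper method are illustrative, while COUNT_MAP_KEY and MAX_RETRIES are the constants the snippet already references:

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class FailingJob implements Job {

    // constants referenced by the recovery snippet above
    static final String COUNT_MAP_KEY = "count";
    static final int MAX_RETRIES = 5; // "up to five times", as in the question

    @Override
    public void execute(JobExecutionContext jobCtx) throws JobExecutionException {
        try {
            // connect to the other application etc
        } catch (Exception e) {
            // on failure, run the recovery snippet shown above: it clones the job
            // under a new identity, schedules a one-shot trigger a little later,
            // and stops once MAX_RETRIES is reached
            rescheduleWithBackoff(jobCtx);
            throw new JobExecutionException(e);
        }
    }

    private void rescheduleWithBackoff(JobExecutionContext jobCtx) throws JobExecutionException {
        // ... body: the recovery snippet shown above ...
    }
}

To get the 10-minute interval asked for in the question, the startAt offset in the snippet would be a fixed 10 * 60 * 1000 ms instead of retries * 100.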