 

JPA synchronization between micro-service instances

I have a timer-based job:

@Component
public class Worker {

    @Autowired
    private JobRepository jobRepository;

    @Scheduled(fixedDelay = 100)
    public void processEnvironmentActions() {
        Job job = pickJob();
    }

    public Job pickJob() {
        Job job = jobRepository.findFirstByStatus(Status.NOT_PROCESSED);
        job.setStatus(Status.PROCESSING);
        jobRepository.save(job);
        return job;
    }
}

Now, in most situations this should give me the correct result. But what happens if two instances of the microservice execute this piece of code at the same time?

How do I make sure that, even with multiple instances of the service, the repository hands a given job to exactly one instance and never to the others?

EDIT: I think people are getting confused by / concentrating on @Transactional, so I removed it. The question remains the same.

asked Dec 18 '18 by Ganesh Satpute


4 Answers

But what will happen if there are two instances of microservice executing this piece of code at the same time?

As so often, the answer is: it depends.

All of the following assumes that your code runs inside a transaction.

  1. Optimistic locking.

    If your Job entity has a version attribute, i.e. an attribute annotated with @Version, optimistic locking is enabled. If two processes pick the same job, the one that saves second will notice that the version attribute has changed and fail with an optimistic locking exception. All you have to do is handle that exception so your process doesn't die but instead tries to get the next Job.

  2. No (JPA level) locking.

    If the Job entity doesn't have a version attribute, JPA by default applies no locking. The second process accessing the job would issue an update that is essentially a no-op, since the first process has already updated it. Neither process will notice the problem. You probably want to avoid this.

  3. Pessimistic locking

    A PESSIMISTIC_WRITE lock prevents anyone else from reading the entity until you are done reading and writing it (at least that is my understanding of the JPA spec). It should therefore prevent the second process from selecting the row before the first process is done writing it. This will probably block the second process entirely, so make sure the transaction holding such a lock is short.

    In order to obtain such a lock, annotate the repository method findFirstByStatus with @Lock(LockModeType.PESSIMISTIC_WRITE), as sketched right after this list.
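
For illustration, a minimal sketch of what that repository method could look like, assuming a Spring Data JPA repository named JobRepository, a Long id on the Job entity, and the Status enum from the question (none of these are confirmed by the post):

import javax.persistence.LockModeType;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;

public interface JobRepository extends JpaRepository<Job, Long> {

    // Another transaction running the same locking query blocks here
    // until the first transaction commits or rolls back.
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    Job findFirstByStatus(Status status);
}

Note that the lock is only taken when the method is called inside a transaction, as stated above.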

Of course, there might be libraries or frameworks out there that handle these kinds of details for you.

answered by Jens Schauder


@Jens Schauder's answer pointed me in the right direction. Let me share the code so it can guide other people. This is how I solved my problem: I changed my Job class as below.

@Entity
public class Job {

    @Version
    private Long version = null;

    // other fields omitted for brevity
}

Now, let's trace the following code

@Transactional
public Job pickJob() {
    Job job = jobRepository.findFirstByStatus(Status.NOT_PROCESSED);
    job.setStatus(Status.PROCESSING);
    Job saved = jobRepository.save(job);
    return saved;
}

Note: Make sure you return the saved object and not the job object. If you return the job object, a later save of it will fail, because job still carries the old version while saved carries the incremented one.


Service 1                                 Service 2 

1. Read Object (version = 1)              1. Read Object (version = 1)
2. Change the object and save 
      (changes the version)
3. Continues to process                   2. Change the object and save 
                                               (this operation fails as 
                                                the version that was read 
                                                was 1 but in the DB version is 2)
                                          3. Skip the job processing 

This way the job will be picked up by only one process.
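
The "Skip the job processing" step can simply catch the optimistic-locking failure around pickJob(). A minimal sketch, assuming pickJob runs in its own transaction so the failure reaches the caller, and that Spring's exception translation wraps the JPA OptimisticLockException (as it does for Spring Data repositories):

import org.springframework.dao.OptimisticLockingFailureException;
import org.springframework.scheduling.annotation.Scheduled;

@Scheduled(fixedDelay = 100)
public void processEnvironmentActions() {
    try {
        Job job = pickJob();
        // process the job ...
    } catch (OptimisticLockingFailureException e) {
        // Another instance saved the same Job first, so our version is
        // stale. Skip it; the next run will pick the next unprocessed job.
    }
}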

answered by Ganesh Satpute


I'm not familiar with spring-batch, but apparently spring-batch implements optimistic locking, so the save operation will fail if another thread has already picked the same job.

See spring batch horizontal scaling

answered by Conffusion


I agree with @Conffusion's answer, but unless you know the flush policy you should use the jobRepository.saveAndFlush(job) method, so that you are sure the SQL statements are pushed down to the database.

See also: Difference between save and saveAndFlush in Spring data jpa
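
Applied to the pickJob method shown above, that suggestion is a one-line change; a sketch:

@Transactional
public Job pickJob() {
    Job job = jobRepository.findFirstByStatus(Status.NOT_PROCESSED);
    job.setStatus(Status.PROCESSING);
    // saveAndFlush pushes the UPDATE (and with it the version check) to the
    // database immediately instead of waiting for the transaction to commit.
    return jobRepository.saveAndFlush(job);
}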

answered by gtosto