
How to save a message into the database and send a response into a topic in an eventually consistent way?

I have the following RabbitMQ consumer:

Consumer consumer = new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
                               AMQP.BasicProperties properties, byte[] body) throws IOException {
        String message = new String(body, "UTF-8");
        sendNotificationIntoTopic(message); // publish the response to the topic
        saveIntoDatabase(message);          // then persist the message
    }
};

The following situation can occur:

  1. The message was sent into the topic successfully.
  2. The connection to the database was lost, so the database insert failed.

As a result, we have data inconsistency.

The expected result: either both actions are executed successfully, or neither is executed at all.

Are there any solutions for how I can achieve this?

P.S.

Currently I have the following idea (please comment on it):

We can suppose that the broker doesn't lose any messages.

We have to be subscribed to the topic we want to send to.

  1. Save the entry into the database and set the status field to 'pending'.
  2. Attempt to send the data to the topic. If the send was successful, update the status field to 'success'.
  3. We need a scheduled job that checks rows with 'pending' status. At that moment, two cases are possible:
    3.1 The notification wasn't sent at all.
    3.2 The notification was sent, but the database update failed (the probability is very low, but it is possible).

    So we have to distinguish between those two cases somehow: we can store the messages from the topic in a collection, and the job can check whether a message was accepted or not. If the job finds a message that corresponds to the database row, we update the status to 'success'; otherwise, we remove the entry from the database.

I think my idea has some weaknesses (for example, in a multi-node application we would have to store the messages in Hazelcast (or an analog), but that is an additional point of potential failure).
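
A minimal sketch of this idea, just for illustration: the messages table (id, payload, status), the plain JDBC connection, and the TopicPublisher wrapper (whose wasAccepted check would be backed by the collection of messages consumed back from the topic) are all assumptions, not existing code.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.UUID;

public class PendingMessagePublisher {

    private final Connection db;        // plain JDBC connection
    private final TopicPublisher topic; // hypothetical wrapper around the broker client

    public PendingMessagePublisher(Connection db, TopicPublisher topic) {
        this.db = db;
        this.topic = topic;
    }

    // Steps 1-2: insert with status 'pending', publish, then try to mark 'success'.
    public void saveAndPublish(String payload) throws Exception {
        String id = UUID.randomUUID().toString();
        try (PreparedStatement ps = db.prepareStatement(
                "INSERT INTO messages (id, payload, status) VALUES (?, ?, 'pending')")) {
            ps.setString(1, id);
            ps.setString(2, payload);
            ps.executeUpdate();
        }
        topic.publish(id, payload); // may fail: the row simply stays 'pending'
        setStatus(id, "success");   // may also fail: the scheduled job repairs it
    }

    // Step 3: scheduled job that re-examines rows still marked 'pending'.
    public void reconcilePending() throws Exception {
        try (PreparedStatement ps = db.prepareStatement(
                "SELECT id FROM messages WHERE status = 'pending'");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                String id = rs.getString("id");
                if (topic.wasAccepted(id)) {
                    setStatus(id, "success"); // case 3.2: sent, but the status update failed
                } else {
                    delete(id);               // case 3.1: never sent, remove the entry
                }
            }
        }
    }

    private void setStatus(String id, String status) throws Exception {
        try (PreparedStatement ps = db.prepareStatement(
                "UPDATE messages SET status = ? WHERE id = ?")) {
            ps.setString(1, status);
            ps.setString(2, id);
            ps.executeUpdate();
        }
    }

    private void delete(String id) throws Exception {
        try (PreparedStatement ps = db.prepareStatement(
                "DELETE FROM messages WHERE id = ?")) {
            ps.setString(1, id);
            ps.executeUpdate();
        }
    }

    // Hypothetical abstraction: wasAccepted would be backed by the collection of messages
    // received back from the topic (Hazelcast or an analog in a multi-node setup).
    public interface TopicPublisher {
        void publish(String id, String payload) throws Exception;
        boolean wasAccepted(String id) throws Exception;
    }
}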

asked Jul 03 '19 by gstackoverflow


1 Answer

The Try Cancel Confirm pattern (https://servicecomb.apache.org/docs/distributed_saga_3/) should be capable of dealing with your problem. You have to tolerate some chance of double submission of the data via the queue. Here is an example:

  1. Define an Operation abstraction and assign an ID to the operation, plus a timestamp.
  2. Write status PENDING to the database (you can do this in the same step as 1).
  3. Write a listener that polls the database for all operations with status PENDING that are older than the "timeout".
  4. For each pending operation, send the data via the queue with the assigned ID.
  5. The recipient side should be aware of the ID, and if the ID has already been processed, nothing should happen.

6A. If you need to be 100% sure that the operation has completed, you need a second queue where the recipient side posts a message "ID - DONE". If such consistency is not necessary, skip this step. Alternatively, it can post "ID - FAILED" with the reason for the failure.

6B. The submitting side either waits for the message from 6A or completes the operation by writing status DONE to the database.

  • Once a certain timeout has passed, or a certain retry limit has been reached, you write status FAIL to the operation.
  • You can potentially send a message to the recipient side to roll back the operation with that ID.

Notice that all these steps do not involve technical transactions. You can do this with a non-transactional database.
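
For illustration, a minimal sketch of the submitting side (steps 1-4, 6B, and the retry/FAIL bullets) using plain JDBC and no transactions. The operations table, the timeout and retry values, and the QueuePublisher wrapper are assumptions, not a definitive implementation.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;
import java.time.Instant;
import java.util.UUID;

public class OperationSubmitter {

    private static final int RETRY_LIMIT = 5;          // assumption: arbitrary retry limit
    private static final long TIMEOUT_MILLIS = 60_000; // assumption: "older than timeout"

    private final Connection db;
    private final QueuePublisher queue;                 // hypothetical broker wrapper

    public OperationSubmitter(Connection db, QueuePublisher queue) {
        this.db = db;
        this.queue = queue;
    }

    // Steps 1-2: assign an ID plus a timestamp and write status PENDING in a single insert.
    public String submit(String payload) throws Exception {
        String id = UUID.randomUUID().toString();
        try (PreparedStatement ps = db.prepareStatement(
                "INSERT INTO operations (id, payload, status, created_at, attempts) "
              + "VALUES (?, ?, 'PENDING', ?, 0)")) {
            ps.setString(1, id);
            ps.setString(2, payload);
            ps.setTimestamp(3, Timestamp.from(Instant.now()));
            ps.executeUpdate();
        }
        return id;
    }

    // Steps 3-4 plus the retry bullets: poll PENDING operations older than the timeout,
    // resend them with their ID, and mark FAIL once the retry limit is exceeded.
    public void resendPending() throws Exception {
        try (PreparedStatement ps = db.prepareStatement(
                "SELECT id, payload, attempts FROM operations "
              + "WHERE status = 'PENDING' AND created_at < ?")) {
            ps.setTimestamp(1, Timestamp.from(Instant.now().minusMillis(TIMEOUT_MILLIS)));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    String id = rs.getString("id");
                    if (rs.getInt("attempts") >= RETRY_LIMIT) {
                        setStatus(id, "FAIL");
                    } else {
                        queue.publish(id, rs.getString("payload"));
                        bumpAttempts(id);
                    }
                }
            }
        }
    }

    // Step 6B: called when an "ID - DONE" confirmation arrives from 6A, or right after
    // publishing if that level of consistency is not required.
    public void markDone(String id) throws Exception {
        setStatus(id, "DONE");
    }

    private void setStatus(String id, String status) throws Exception {
        try (PreparedStatement ps = db.prepareStatement(
                "UPDATE operations SET status = ? WHERE id = ?")) {
            ps.setString(1, status);
            ps.setString(2, id);
            ps.executeUpdate();
        }
    }

    private void bumpAttempts(String id) throws Exception {
        try (PreparedStatement ps = db.prepareStatement(
                "UPDATE operations SET attempts = attempts + 1 WHERE id = ?")) {
            ps.setString(1, id);
            ps.executeUpdate();
        }
    }

    public interface QueuePublisher {                   // hypothetical abstraction
        void publish(String operationId, String payload) throws Exception;
    }
}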

What I have written is a variation of the Try Cancel Confirm pattern, where each recipient of a message should be aware of how to manage its own data.
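
And a sketch of the recipient side (step 5 plus the 6A confirmation), again only an illustration: the processed_operations table and the ConfirmationSender wrapper are assumptions, and the deduplication relies on a unique constraint on the operation ID.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class IdempotentRecipient {

    private final Connection db;
    private final ConfirmationSender confirmations; // hypothetical wrapper for the second queue (6A)

    public IdempotentRecipient(Connection db, ConfirmationSender confirmations) {
        this.db = db;
        this.confirmations = confirmations;
    }

    // Step 5: if the operation ID has already been processed, do nothing (this is what
    // tolerates double submission); otherwise process the data and remember the ID.
    public void onMessage(String operationId, String payload) throws Exception {
        if (!recordIfNew(operationId)) {
            return;                               // duplicate delivery, already handled
        }
        processBusinessData(payload);             // the recipient's own data handling
        confirmations.send(operationId, "DONE");  // 6A: report completion (or FAILED + reason)
    }

    // Relies on a primary-key/unique constraint on processed_operations(operation_id):
    // the insert succeeds only for IDs seen for the first time. A real implementation
    // would inspect the vendor error code to distinguish duplicates from other failures.
    private boolean recordIfNew(String operationId) throws SQLException {
        try (PreparedStatement ps = db.prepareStatement(
                "INSERT INTO processed_operations (operation_id) VALUES (?)")) {
            ps.setString(1, operationId);
            ps.executeUpdate();
            return true;
        } catch (SQLException duplicateKey) {
            return false;
        }
    }

    private void processBusinessData(String payload) {
        // placeholder for the recipient's real work
    }

    public interface ConfirmationSender {           // hypothetical abstraction over queue 6A
        void send(String operationId, String status) throws Exception;
    }
}

Because the duplicate check is just an insert against a unique key, the recipient side also stays non-transactional, in line with the note above.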

answered Oct 16 '22 by Alexander Petrov