In my PHP application (built with the Symfony framework and Propel ORM), when I add a record to the MySQL database, I also need to update an external MySQL database through a web service API provided by the external vendor.
The question is: what is the best practice for maintaining database integrity? For example, if the first update succeeds and the second does not because the web service is unavailable, I must be able to either roll back the first update or retry the second until it succeeds, so that the two databases stay consistent.
Specifically, I am looking for syntax like this:
void RootMethod()
{
    using (TransactionScope scope = new TransactionScope())
    {
        SomeMethod();      // local database work enlisted in the scope
        CallWebService();  // if this throws, Complete() is never reached
        scope.Complete();  // disposing without Complete() rolls everything back
    }
}
But I am not sure whether a construct like this exists in PHP, or whether this is even the right approach.
What do you think?
Edit: Some have asked why there is a need for a two-part update. It is because I am creating a front-end application that connects to an existing back-end application, and I don't want to change the back-end application. So inevitably there will be some overlap, and the data needs to be kept in sync.
Another edit: The two parts of the transaction must be done together; running a cron job to synchronize the tables is not desirable.
The big question is whether duplicate updates to the web service matter, and if they do whether they can be detected. If you can detect duplicates (usually with a unique transaction number) or if duplicates don't matter, then you can build a two-phase commit style approach which is reliable.
If duplicate transactions to the web service cannot be detected and the updates are not idempotent then you are out of luck.
This is the basic algorithm:
begin transaction;
    do local work;
    save information for external call;
    set an appropriate time for the next attempt;
    mark external call as not performed;
commit;

begin transaction;
    make external call;
    if successful
        mark external call as performed (or delete the record)
    else
        set the time for the next attempt
commit;
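A minimal PHP sketch of those two transactions, using PDO directly to keep it self-contained (Propel would work just as well). The pending_calls table, the callWebService() helper, and the txn_id column (the unique transaction number used for duplicate detection) are assumptions made up for illustration:

<?php
// Sketch only: assumes a table
//   pending_calls(id, payload, txn_id, status, attempts, next_attempt_at)
// and a hypothetical callWebService($payload, $txnId) vendor client.

function addRecordWithExternalSync(PDO $pdo, array $data): void
{
    // Transaction 1: local work plus a durable record of the pending call.
    $pdo->beginTransaction();
    $pdo->prepare('INSERT INTO my_table (name) VALUES (?)')
        ->execute([$data['name']]);
    $txnId = bin2hex(random_bytes(16)); // unique ID so the remote side can detect replays
    $pdo->prepare('INSERT INTO pending_calls (payload, txn_id, status, attempts, next_attempt_at)
                   VALUES (?, ?, "pending", 0, NOW())')
        ->execute([json_encode($data), $txnId]);
    $pdo->commit(); // both rows become durable together, or neither does

    // Transaction 2: attempt the external call and record the outcome.
    $stmt = $pdo->prepare('SELECT * FROM pending_calls WHERE txn_id = ?');
    $stmt->execute([$txnId]);
    $call = $stmt->fetch(PDO::FETCH_ASSOC);

    $pdo->beginTransaction();
    try {
        callWebService($call['payload'], $call['txn_id']);
        $pdo->prepare('UPDATE pending_calls SET status = "done" WHERE id = ?')
            ->execute([$call['id']]);
    } catch (Exception $e) {
        $pdo->prepare('UPDATE pending_calls
                       SET attempts = attempts + 1,
                           next_attempt_at = DATE_ADD(NOW(), INTERVAL 1 MINUTE)
                       WHERE id = ?')
            ->execute([$call['id']]);
    }
    $pdo->commit();
}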
You then need a regular task, thread, or whatever that does something like this:
for each record where the time for the next attempt <= now
    begin work;
    if the remote service has not performed this transaction
        make the remote call;
        if successful
            mark as done;
        else if too many attempts
            mark the transaction as permanently failed;
            alert operator;
        else
            set the time for the next attempt;
        endif
    else
        mark as done;
    endif
    commit;
endfor
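In PHP that task might look roughly like this, run from cron or a worker loop. hasRemotePerformed(), callWebService(), alertOperator(), and scheduleNextAttempt() are hypothetical stand-ins for whatever duplicate-detection, update, and alerting hooks you actually have:

<?php
// Recovery task sketch, reusing the pending_calls table from the earlier example.

const MAX_ATTEMPTS = 10;

function recoverPendingCalls(PDO $pdo): void
{
    $stmt = $pdo->query('SELECT * FROM pending_calls
                         WHERE status = "pending" AND next_attempt_at <= NOW()');

    foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $call) {
        $pdo->beginTransaction();
        if (hasRemotePerformed($call['txn_id'])) {
            // The call succeeded earlier, but we crashed before recording it.
            markDone($pdo, $call['id']);
        } else {
            try {
                callWebService($call['payload'], $call['txn_id']);
                markDone($pdo, $call['id']);
            } catch (Exception $e) {
                if ($call['attempts'] + 1 >= MAX_ATTEMPTS) {
                    $pdo->prepare('UPDATE pending_calls SET status = "failed" WHERE id = ?')
                        ->execute([$call['id']]);
                    alertOperator($call); // hypothetical notification hook
                } else {
                    scheduleNextAttempt($pdo, $call); // bump attempts, push next_attempt_at out
                }
            }
        }
        $pdo->commit();
    }
}

function markDone(PDO $pdo, int $id): void
{
    $pdo->prepare('UPDATE pending_calls SET status = "done" WHERE id = ?')
        ->execute([$id]);
}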
This approach handles all the failure conditions reliably and ensures that both pieces of work are eventually done.
The basic failures:
A failure before the first commit completes: Everything rolls back.
A failure after the first commit but before the web service completes (this includes transient failures in the web service itself): The remote web service transaction is replayed by the recovery task.
A failure after the web service completes but before the second commit completes: The duplicate web service transaction is detected by the recovery task and the local record is dequeued.
Failures in the recovery task: Essentially the same as failures in the second transaction.
Other notes:
A gradual back-off approach is useful for failures: if a service is failing transiently, you want to slow down your retries (see the sketch after this list).
If you have an ordering requirement on the external service you might need some additional structure.
Depending on how you implement the recovery task, you could just leave the web service calls to that task and not have the second transaction in the main application flow.
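To illustrate the back-off note above: the retry delay can simply double with each failed attempt, up to a cap. The constants are arbitrary; scheduleNextAttempt() in the recovery sketch could use this when setting next_attempt_at:

<?php
// One common back-off scheme: double the delay on every attempt, with a cap.
function nextRetryDelaySeconds(int $attempts): int
{
    $base = 60;       // first retry after one minute
    $cap  = 6 * 3600; // never wait more than six hours
    return min($cap, $base * (2 ** $attempts));
}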
Response to the additional requirement: "The two parts of the transaction must be done together; running a cron job to synchronize the tables is not desirable."
My reading of this requirement is: "the two systems should never fail."
When one or both of the systems fail, you need something to pick up the pieces and reconcile things. You can use a fully fledged TP monitor to do transaction coordination, or you can build a simple monitor like the one in my example that handles your specific case. Either way, there is something that keeps track of what was happening so that things can be resolved correctly after a failure condition.
If your requirement really is that things always happen together (and a transactional message queue or two-phase commit approach does not work for you), you would be better off storing the data for both systems in the same database (aka "resource manager") and having a single resource manager transaction.
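For example, if both datasets can live in the same MySQL instance, the whole problem collapses into a single ordinary transaction; a sketch with PDO, where the table names are made up:

<?php
// Both "systems" share one database, so a single transaction covers both.
$pdo->beginTransaction();
try {
    $pdo->prepare('INSERT INTO frontend_records (name) VALUES (?)')->execute([$name]);
    $pdo->prepare('INSERT INTO backend_records (name) VALUES (?)')->execute([$name]);
    $pdo->commit();   // both inserts commit atomically...
} catch (Exception $e) {
    $pdo->rollBack(); // ...or neither does
    throw $e;
}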
If you do get a solution to this problem that meets the requirements of having two separate systems consistent across multiple transactions and never requires subsequent reconciliation after a failure, you should write it up and get it published in The VLDB Journal, ACM TODS or IEEE TKDE.
This is going to be tricky. You need two-phase commit for a reliable solution, but that would be a ton of work to implement for your specific needs.
Maybe a great solution is not actually called for. Are you under difficult performance constraints? Transactions should generally be short-lived... but maybe you should keep the transaction open around the web service call? That would reduce the overall throughput of the database (at the very least), but it might be perfectly acceptable.
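A sketch of that idea in PHP with PDO, where callWebService() stands in for the vendor client:

<?php
// Hold the local transaction open across the web service call and roll back
// if the call fails. Simple, but it holds database locks for the duration of
// the call, and a crash between the call and the commit still loses sync.
$pdo->beginTransaction();
try {
    $pdo->prepare('INSERT INTO my_table (name) VALUES (?)')->execute([$name]);
    callWebService($payload); // hypothetical vendor client
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}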
The approach you showed will have problems handling hard system failures (power failures, hardware faults, etc). To work around that you'll need to add tracking to your main database and a background process/startup process to handle failures. Pretty fiddly to do, but certainly possible.
Some failures might end up not being fixable (the first part succeeded, the second part failed, and the first part can't be undone because another transaction has since updated the same data). It all depends on your precise business rules. An accounting system would be the easiest, because undoing a transaction is actually done with offsetting records, not updates.
Good luck.
I don't think rolling back is really going to help the situation. If your web service is down, making more calls will only compound the problem, and then you have to worry about whether or not your rollback went through, etc.
I would do this with scheduled full syncs. What is your margin for error? Are you willing for the database to be slightly out of sync? By how much? Would it be a big deal to have a synchronizer run every night or so to fix any issues that cropped up? How often is the web service down that you have to worry about this?
The update queue for failed web service calls is a decent idea, but if your web service is down, there's probably going to be a ton of those at once, not just one or two, so you might as well do a full sync after an outage anyway.
Really your answer depends on those questions. Don't make the mistake of assuming your entire program will crash if things are 0.01% out of sync for 10 minutes. Figure out what acceptable margins for errors are.