I have been struggling with how best to design a system to prevent duplicate posting of data.
Here are the steps:
Here is the scenario: the client submits data with GUID "1", then resubmits data with GUID "1" before step (5) completes for the original submission. The transaction ends up being processed twice.
What is the best design pattern to prevent this without using semaphores or blocking? The user should still be able to resubmit, in case the first submission fails for some reason (e.g., a hardware issue on the server side).
Thanks!
Store the GUID in a column with an SQL UNIQUE constraint.
When you attempt (within the transaction) to insert a second duplicate GUID, the operation will fail, at which point you roll back the entire transaction.
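The UNIQUE-constraint approach can be sketched as follows. This is a minimal illustration using Python's sqlite3 rather than SQL Server; the table and column names are assumptions, and the real "steps 3-5" processing would run inside the same transaction:

```python
import sqlite3
import uuid

# Sketch of the idempotency pattern (sqlite3 used only for illustration;
# the table and column names here are assumptions).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE submissions (
        guid TEXT PRIMARY KEY,   -- enforces uniqueness of the client GUID
        payload TEXT
    )
""")

def submit(conn, guid, payload):
    """Attempt to record a submission; return False if it is a duplicate."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute(
                "INSERT INTO submissions (guid, payload) VALUES (?, ?)",
                (guid, payload),
            )
            # ... the rest of the processing would run here, inside the same
            # transaction, so a failure rolls back the GUID insert as well,
            # leaving the client free to resubmit.
        return True
    except sqlite3.IntegrityError:
        # UNIQUE/PRIMARY KEY violation: this GUID was already inserted
        return False

guid = str(uuid.uuid4())
print(submit(conn, guid, "order data"))  # True  - first submission accepted
print(submit(conn, guid, "order data"))  # False - duplicate rejected
```

Because the GUID insert and the processing share one transaction, a server-side failure rolls everything back and the client's retry succeeds cleanly, while a true duplicate fails fast on the constraint.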
You could implement step 2 by using a query which reads uncommitted data. For instance, if you are using MS SQL Server, you could do this:
IF NOT EXISTS(SELECT * FROM SomeTable WITH (NOLOCK) WHERE Guid = @ClientGUID)
BEGIN
-- Insert the GUID ASAP in the transaction so that the next query will read it
-- Do steps 3-5
END
The key here is the (NOLOCK) hint, which allows the query to read data that has not yet been committed.