I assume there is a single row for each flight? If so:
IF EXISTS (SELECT * FROM Bookings WHERE FlightID = @Id)
BEGIN
--UPDATE HERE
END
ELSE
BEGIN
-- INSERT HERE
END
I'm assuming what I said above because, as it stands, your way of doing things can overbook a flight: it will insert a new row even when there are 10 tickets max and you are booking 20.
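For completeness, here is a minimal filled-in sketch of that pattern. It assumes a Bookings table shaped like the one created in the MERGE answer below (FlightID, TicketsMax, TicketsBooked) and an @TicketsToBook parameter, and it caps the update so the flight can't be overbooked:
IF EXISTS (SELECT * FROM Bookings WHERE FlightID = @Id)
BEGIN
    -- update only if the flight still has room for the requested tickets
    UPDATE Bookings
    SET TicketsBooked = TicketsBooked + @TicketsToBook
    WHERE FlightID = @Id
      AND TicketsBooked + @TicketsToBook <= TicketsMax
END
ELSE
BEGIN
    -- FlightID is an identity column, so only the ticket counts are supplied
    INSERT INTO Bookings (TicketsMax, TicketsBooked)
    VALUES (@TicketsToBook, @TicketsToBook)
END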
Take a look at the MERGE command. It lets you do UPDATE, INSERT, and DELETE in one statement.
Here is a working implementation using MERGE - it checks whether the flight is full before doing an update, and otherwise does an insert.
if exists(select 1 from INFORMATION_SCHEMA.TABLES T
where T.TABLE_NAME = 'Bookings')
begin
drop table Bookings
end
GO
create table Bookings(
FlightID int identity(1, 1) primary key,
TicketsMax int not null,
TicketsBooked int not null
)
GO
insert Bookings(TicketsMax, TicketsBooked) select 1, 0
insert Bookings(TicketsMax, TicketsBooked) select 2, 2
insert Bookings(TicketsMax, TicketsBooked) select 3, 1
GO
select * from Bookings
And then ...
declare @FlightID int = 1
declare @TicketsToBook int = 2
-- This should add a new record, since flight 1 does not have room for 2 more tickets
merge Bookings as T
using (select @FlightID as FlightID, @TicketsToBook as TicketsToBook) as S
on T.FlightID = S.FlightID
and T.TicketsMax >= (T.TicketsBooked + S.TicketsToBook) -- >= lets a booking fill the flight exactly
when matched then
    update set T.TicketsBooked = T.TicketsBooked + S.TicketsToBook
when not matched then
    insert (TicketsMax, TicketsBooked)
    values (S.TicketsToBook, S.TicketsToBook);
select * from Bookings
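One caveat: MERGE by itself does not protect you from race conditions when two sessions run the upsert concurrently, so a common recommendation is to add a HOLDLOCK hint on the target table. A minimal sketch, reusing the table and variables above:
merge Bookings with (holdlock) as T
using (select @FlightID as FlightID, @TicketsToBook as TicketsToBook) as S
on T.FlightID = S.FlightID
and T.TicketsMax >= (T.TicketsBooked + S.TicketsToBook)
when matched then
    update set T.TicketsBooked = T.TicketsBooked + S.TicketsToBook
when not matched then
    insert (TicketsMax, TicketsBooked)
    values (S.TicketsToBook, S.TicketsToBook);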
Pass updlock, rowlock, holdlock hints when testing for existence of the row.
begin tran /* default read committed isolation level is fine */
if not exists (select * from Table with (updlock, rowlock, holdlock) where ...)
/* insert */
else
/* update */
commit /* locks are released here */
The updlock hint forces the query to take an update lock on the row if it already exists, preventing other transactions from modifying it until you commit or roll back.
The holdlock hint forces the query to take a range lock, preventing other transactions from adding a row matching your filter criteria until you commit or roll back.
The rowlock hint forces lock granularity to row level instead of the default page level, so your transaction won't block other transactions trying to update unrelated rows in the same page. Be aware of the trade-off between reduced contention and increased locking overhead: you should avoid taking large numbers of row-level locks in a single transaction.
See http://msdn.microsoft.com/en-us/library/ms187373.aspx for more information.
Note that locks are taken as the statements which take them are executed - invoking begin tran doesn't give you immunity against another transaction pinching locks on something before you get to it. You should try and factor your SQL to hold locks for the shortest possible time by committing the transaction as soon as possible (acquire late, release early).
Note that row-level locks may be less effective if your PK is a bigint, as the internal hashing on SQL Server is degenerate for 64-bit values (different key values may hash to the same lock id).
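Putting it together, here is a runnable version of the skeleton above, reusing the Bookings table from the MERGE answer (the variable values are just illustrative):
declare @FlightID int = 1
declare @TicketsToBook int = 2

begin tran
if not exists (select * from Bookings with (updlock, rowlock, holdlock)
               where FlightID = @FlightID)
    insert Bookings (TicketsMax, TicketsBooked)
    values (@TicketsToBook, @TicketsToBook) -- same insert semantics as the MERGE answer
else
    update Bookings
    set TicketsBooked = TicketsBooked + @TicketsToBook
    where FlightID = @FlightID
commit /* locks are released here */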
Here is my solution. My method doesn't need IF or MERGE, and it is simple.
INSERT INTO TableName (col1,col2)
SELECT @par1, @par2
WHERE NOT EXISTS (SELECT col1,col2 FROM TableName
WHERE col1=@par1 AND col2=@par2)
For Example:
INSERT INTO Members (username)
SELECT 'Cem'
WHERE NOT EXISTS (SELECT username FROM Members
WHERE username='Cem')
Explanation:
(1) SELECT col1,col2 FROM TableName WHERE col1=@par1 AND col2=@par2 - the subquery looks for an existing row with the values you want to insert.
(2) SELECT @par1, @par2 WHERE NOT EXISTS (...) - this produces a row of your values only if subquery (1) returned nothing.
(3) The INSERT then stores the row produced in step (2), i.e. only when no matching row already exists.
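To show where @par1 and @par2 would typically come from, here is a hedged sketch wrapping the pattern in a stored procedure (AddMemberIfNew is a hypothetical name, and the Members(username) table is assumed from the example above):
CREATE PROCEDURE AddMemberIfNew
    @username nvarchar(50)
AS
BEGIN
    INSERT INTO Members (username)
    SELECT @username
    WHERE NOT EXISTS (SELECT username FROM Members
                      WHERE username = @username)
END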
I finally was able to insert a row, on the condition that it didn't already exist, using the following model:
INSERT INTO table ( column1, column2, column3 )
(
SELECT $column1, $column2, $column3
WHERE NOT EXISTS (
SELECT 1
FROM table
WHERE column1 = $column1
AND column2 = $column2
AND column3 = $column3
)
)
which I found at:
http://www.postgresql.org/message-id/[email protected]
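For instance, a concrete (purely illustrative) instantiation of that model - the bookings table and its columns here are hypothetical:
INSERT INTO bookings ( flight_id, seat, passenger )
(
    SELECT 42, '12A', 'Cem'
    WHERE NOT EXISTS (
        SELECT 1
        FROM bookings
        WHERE flight_id = 42
          AND seat = '12A'
          AND passenger = 'Cem'
    )
);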