I am aware of COLUMNS_UPDATED(), but I need a quick shortcut (I am already writing one myself, but if anyone has one ready it would save me time and I would appreciate it).
Basically I need an XML document containing only the updated column values; I need this for replication purposes.
SELECT * FROM inserted gives me every column, but I need only the updated ones.
Something like the following...
CREATE TRIGGER DBCustomers_Insert
ON DBCustomers
AFTER UPDATE
AS
BEGIN
    DECLARE @sql AS NVARCHAR(1024);
    SET @sql = 'SELECT ';
    -- I NEED HELP FOR THE FOLLOWING PART: I can write every column out by hand,
    -- but I need an automated routine that works regardless of the column list:
    -- for each column, if it was modified, append: SET @sql = @sql + ',' + columnname ...
    SET @sql = @sql + ' FROM inserted FOR XML RAW';
    DECLARE @x AS XML;
    SET @x = CAST(EXEC(@sql) AS XML);  -- pseudocode; EXEC cannot actually be used inside CAST
    -- ... use @x
END
In a SQL Server trigger there are two ways to check whether a column was updated: the UPDATE(<column name>) function and the COLUMNS_UPDATED() function.
The trigger still fires for every UPDATE statement, but inside it you can test whether a specific column was part of the update and run code only in that case. The UPDATE() function is the simplest way to do this.
COLUMNS_UPDATED() tells you which columns of the table or view were inserted or updated. It returns a VARBINARY bitmask, which lets you test several columns at once.
Note that both report a column as updated whenever it appears in the SET clause, even if the new value equals the old one; if you only want to react to rows whose value actually changed, you have to compare inserted against deleted yourself.
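For illustration, here is a minimal sketch of both tests in an AFTER UPDATE trigger. It assumes the DBCustomers table from the question has a column named Email (a hypothetical name here); adjust the names to your schema and run it in its own batch.

CREATE TRIGGER DBCustomers_UpdateCheck_Demo
ON dbo.DBCustomers
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- UPDATE(<column>) is true if the column appeared in the SET clause.
    IF UPDATE(Email)
        PRINT 'Email was listed in the UPDATE statement.';

    -- COLUMNS_UPDATED() is a VARBINARY bitmask with one bit per column,
    -- ordered by column_id; test the bit that belongs to the Email column.
    DECLARE @colid int;
    SET @colid = COLUMNPROPERTY(OBJECT_ID('dbo.DBCustomers'), 'Email', 'ColumnId');

    IF (SUBSTRING(COLUMNS_UPDATED(), (@colid - 1) / 8 + 1, 1)
            & POWER(2, (@colid - 1) % 8)) > 0
        PRINT 'The Email bit is set in COLUMNS_UPDATED().';
END
GO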
I've another completely different solution that doesn't use COLUMNS_UPDATED at all, nor does it rely on building dynamic SQL at runtime. (You might want to use dynamic SQL at design time, but that's another story.)
Basically you start with the inserted and deleted tables, unpivot each of them so you are just left with the unique key, field value and field name columns for each. Then you join the two and filter for anything that's changed.
Here is a full working example, including some test calls to show what is logged.
-- -------------------- Setup tables and some initial data --------------------
CREATE TABLE dbo.Sample_Table (ContactID int, Forename varchar(100), Surname varchar(100), Extn varchar(16), Email varchar(100), Age int );
INSERT INTO Sample_Table VALUES (1,'Bob','Smith','2295','[email protected]',24);
INSERT INTO Sample_Table VALUES (2,'Alice','Brown','2255','[email protected]',32);
INSERT INTO Sample_Table VALUES (3,'Reg','Jones','2280','[email protected]',19);
INSERT INTO Sample_Table VALUES (4,'Mary','Doe','2216','[email protected]',28);
INSERT INTO Sample_Table VALUES (5,'Peter','Nash','2214','[email protected]',25);
CREATE TABLE dbo.Sample_Table_Changes (ContactID int, FieldName sysname, FieldValueWas sql_variant, FieldValueIs sql_variant, modified datetime default (GETDATE()));
GO
-- -------------------- Create trigger --------------------
CREATE TRIGGER TriggerName ON dbo.Sample_Table FOR DELETE, INSERT, UPDATE AS
BEGIN
SET NOCOUNT ON;
--Unpivot deleted
WITH deleted_unpvt AS (
SELECT ContactID, FieldName, FieldValue
FROM
(SELECT ContactID
, cast(Forename as sql_variant) Forename
, cast(Surname as sql_variant) Surname
, cast(Extn as sql_variant) Extn
, cast(Email as sql_variant) Email
, cast(Age as sql_variant) Age
FROM deleted) p
UNPIVOT
(FieldValue FOR FieldName IN
(Forename, Surname, Extn, Email, Age)
) AS deleted_unpvt
),
--Unpivot inserted
inserted_unpvt AS (
SELECT ContactID, FieldName, FieldValue
FROM
(SELECT ContactID
, cast(Forename as sql_variant) Forename
, cast(Surname as sql_variant) Surname
, cast(Extn as sql_variant) Extn
, cast(Email as sql_variant) Email
, cast(Age as sql_variant) Age
FROM inserted) p
UNPIVOT
(FieldValue FOR FieldName IN
(Forename, Surname, Extn, Email, Age)
) AS inserted_unpvt
)
--Join them together and show what's changed
INSERT INTO Sample_Table_Changes (ContactID, FieldName, FieldValueWas, FieldValueIs)
SELECT Coalesce (D.ContactID, I.ContactID) ContactID
, Coalesce (D.FieldName, I.FieldName) FieldName
, D.FieldValue as FieldValueWas
, I.FieldValue AS FieldValueIs
FROM
deleted_unpvt d
FULL OUTER JOIN
inserted_unpvt i
on D.ContactID = I.ContactID
AND D.FieldName = I.FieldName
WHERE
D.FieldValue <> I.FieldValue --Changes
OR (D.FieldValue IS NOT NULL AND I.FieldValue IS NULL) -- Deletions
OR (D.FieldValue IS NULL AND I.FieldValue IS NOT NULL) -- Insertions
END
GO
-- -------------------- Try some changes --------------------
UPDATE Sample_Table SET age = age+1;
UPDATE Sample_Table SET Extn = '5'+Extn where Extn Like '221_';
DELETE FROM Sample_Table WHERE ContactID = 3;
INSERT INTO Sample_Table VALUES (6,'Stephen','Turner','2299','[email protected]',25);
UPDATE Sample_Table SET ContactID = 7 where ContactID = 4; --this will be shown as a delete and an insert
-- -------------------- See the results --------------------
SELECT *, SQL_VARIANT_PROPERTY(FieldValueWas, 'BaseType') FieldBaseType, SQL_VARIANT_PROPERTY(FieldValueWas, 'MaxLength') FieldMaxLength from Sample_Table_Changes;
-- -------------------- Cleanup --------------------
DROP TABLE dbo.Sample_Table; DROP TABLE dbo.Sample_Table_Changes;
So no messing around with bigint bitfields and arithmetic overflow problems. If you know the columns you want to compare at design time then you don't need any dynamic SQL.
On the downside, the output is in a different format and all the field values are converted to sql_variant. The first could be fixed by pivoting the output again, and the second by casting back to the required types based on your knowledge of the table's design, but both would require some complex dynamic SQL. Neither may be an issue for your XML output. This question does something similar to get the output back in the same format.
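Since the question ultimately wants the changes as XML, the rows written to the log table (or the unpivot/join result selected directly inside the trigger) can be serialized with FOR XML. A minimal sketch, assuming the Sample_Table_Changes table above; casting the sql_variant values to nvarchar keeps the XML serialization simple:

DECLARE @x XML;
SET @x = (
    SELECT ContactID
         , FieldName
         , CAST(FieldValueWas AS nvarchar(4000)) AS FieldValueWas
         , CAST(FieldValueIs  AS nvarchar(4000)) AS FieldValueIs
    FROM dbo.Sample_Table_Changes
    FOR XML RAW('change'), ROOT('changes')
);
SELECT @x;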
Edit: Reviewing the comments below, if you have a natural primary key that could change then you can still use this method. You just need to add a column that is populated by default with a GUID using the NEWID() function. You then use this column in place of the primary key.
You may want to add an index to this field, but as the deleted and inserted tables in a trigger are in memory it might not get used and may have a negative effect on performance.
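A minimal sketch of that change against the sample table above (the column and constraint names are just placeholders):

-- Add an immutable surrogate identifier and join the unpivoted
-- deleted/inserted rows on it instead of ContactID.
ALTER TABLE dbo.Sample_Table
    ADD RowGuid uniqueidentifier NOT NULL
        CONSTRAINT DF_Sample_Table_RowGuid DEFAULT NEWID();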