I need a workaround for a design mistake that is too late to fix. I'm expecting duplicate data to arrive in a table that was specifically structured to prevent duplicates:
CREATE TABLE building (
    building_id INT IDENTITY(1, 1) NOT NULL,
    address_id INT NOT NULL,
    company_id INT NOT NULL,
    CONSTRAINT building_pk PRIMARY KEY (building_id),
    CONSTRAINT building_fk1 FOREIGN KEY (address_id) REFERENCES address (address_id),
    CONSTRAINT building_fk2 FOREIGN KEY (company_id) REFERENCES company (company_id)
);

CREATE INDEX building_idx1 ON building (address_id);
CREATE INDEX building_idx2 ON building (company_id);

/* Prevent dupes */
ALTER TABLE building ADD CONSTRAINT building_uk1 UNIQUE (address_id);
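To illustrate the current behaviour, with building_uk1 in place a second building at the same address is rejected no matter which company it belongs to (a sketch; the id values are hypothetical, and the referenced address and company rows are assumed to already exist):

```sql
INSERT INTO building (address_id, company_id) VALUES (1, 314);
INSERT INTO building (address_id, company_id) VALUES (1, 999);
-- fails: violation of UNIQUE KEY constraint building_uk1
```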
(The original localised names have been changed to make them clearer and more relevant to the question.)
The export-buildings API at Acme Inc. will be sending different buildings that share the same address. Removing building_uk1 outright would break certain features that assume uniqueness, and we can't afford a major rewrite at this point.

I'd like to evaluate the effects of disabling building_uk1 just for that one company (company_id = 314), which may be able to do without the features that depend on uniqueness. What are my options for relaxing building_uk1 so that it is enforced only when company_id is not 314?
A simple unique filtered index should be enough. Note that you have to drop the original constraint first, otherwise building_uk1 will keep enforcing uniqueness table-wide:

ALTER TABLE [dbo].[building] DROP CONSTRAINT building_uk1;

CREATE UNIQUE NONCLUSTERED INDEX [IX_building_uk1] ON [dbo].[building]
(
    [address_id] ASC
)
WHERE (company_id <> 314);
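A quick sanity check of the filtered index (a sketch; it assumes the original building_uk1 constraint has already been dropped and that the sample address and company ids exist in the referenced tables):

```sql
-- Company 314 may now have several buildings at the same address:
INSERT INTO building (address_id, company_id) VALUES (1, 314);
INSERT INTO building (address_id, company_id) VALUES (1, 314);  -- succeeds

-- Every other company is still limited to one building per address:
INSERT INTO building (address_id, company_id) VALUES (1, 42);   -- succeeds
INSERT INTO building (address_id, company_id) VALUES (1, 42);   -- fails: violates IX_building_uk1
```

Because rows with company_id = 314 are excluded by the filter, they are not in the index at all, so they don't count against the other companies' uniqueness either.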