I have the following tables in a relational database:
[Sensor]
LocationId [PK / FK -> Location]
SensorNo [PK]
[AnalogSensor]
LocationId [PK/FK -> Sensor]
SensorNo [PK/FK -> Sensor]
UpperLimit
LowerLimit
[SwitchSensor]
LocationId [PK/FK -> Sensor]
SensorNo [PK/FK -> Sensor]
OnTimeLimit
[Reading]
LocationId [PK/FK -> Sensor]
SensorNo [PK/FK -> Sensor]
ReadingDtm [PK]
[ReadingSwitch]
LocationId [PK/FK -> Reading]
SensorNo [PK/FK -> Reading]
ReadingDtm [PK/FK -> Reading]
Switch
[ReadingValue]
LocationId [PK/FK -> Reading]
SensorNo [PK/FK -> Reading]
ReadingDtm [PK/FK -> Reading]
Value
[Alert]
LocationId [PK/FK -> Reading]
SensorNo [PK/FK -> Reading]
ReadingDtm [PK/FK -> Reading]
Basically, ReadingSwitch and ReadingValue are subtypes of Reading, and SwitchSensor and AnalogSensor are subtypes of Sensor. A reading can be either a ReadingSwitch or a ReadingValue - it cannot be both - and a Sensor can be either an AnalogSensor or a SwitchSensor, not both.
The only way I've come across to do this so far is here.
There surely must be a nicer way to do this sort of thing.
The only other way I can think of is to not have sub types but completely expand everything:
[SwitchSensor]
LocationId [PK/FK -> Location]
SensorNo [PK]
[AnalogSensor]
LocationId [PK/FK -> Location]
SensorNo [PK]
[SwitchReading]
LocationId [PK/FK -> SwitchSensor]
SensorNo [PK/FK -> SwitchSensor]
ReadingDtm
Switch
[AnalogReading]
LocationId [PK/FK -> AnalogSensor]
SensorNo [PK/FK -> AnalogSensor]
ReadingDtm
Value
[AnalogReadingAlert]
LocationId [PK/FK -> AnalogReading]
SensorNo [PK/FK -> AnalogReading]
ReadingDtm [PK/FK -> AnalogReading]
[SwitchReadingAlert]
LocationId [PK/FK -> SwitchReading]
SensorNo [PK/FK -> SwitchReading]
ReadingDtm [PK/FK -> SwitchReading]
Which might not be so bad but I also have tables that reference the Alert table, so they too would have to be duplicated:
[AnalogReadingAlertAcknowledgement]
...
[AnalogReadingAlertAction]
...
[SwitchReadingAlertAcknowledgement]
...
[SwitchReadingAlertAction]
etc.
Does this problem make any sense to anyone??
A common technique to ensure referential integrity is to use triggers to implement cascades. A cascade occurs when an action on one table fires a trigger that in turn creates a similar action in another table, which could in turn fire another trigger and so on recursively.
Referential integrity requires that a foreign key must have a matching primary key or it must be null. This constraint is specified between two tables (parent and child); it maintains the correspondence between rows in these tables. It means the reference from a row in one table to another table must be valid.
Three types of rules can be attached to each referential constraint: an INSERT rule, an UPDATE rule, and a DELETE rule.
SQL supports the referential integrity concept with the CREATE TABLE and ALTER TABLE statements. You can use the CREATE TABLE statement or the ALTER TABLE statement to add a referential constraint. To remove a referential constraint, use the ALTER TABLE statement.
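As a brief, hedged illustration (using the Sensor and Location tables from the question, with assumed datatypes; the exact rule keywords vary by platform), a referential constraint can be declared with CREATE TABLE and added or removed with ALTER TABLE:

```sql
-- Declared at table-creation time (Location is assumed to exist already):
CREATE TABLE Sensor (
    LocationId int NOT NULL,
    SensorNo   int NOT NULL,
    CONSTRAINT Sensor_pk PRIMARY KEY (LocationId, SensorNo),
    CONSTRAINT Sensor_Location_fk FOREIGN KEY (LocationId)
        REFERENCES Location (LocationId)
        ON UPDATE NO ACTION   -- the UPDATE rule
        ON DELETE NO ACTION   -- the DELETE rule
);

-- Removed, and re-added, with ALTER TABLE:
ALTER TABLE Sensor DROP CONSTRAINT Sensor_Location_fk;

ALTER TABLE Sensor
    ADD CONSTRAINT Sensor_Location_fk FOREIGN KEY (LocationId)
        REFERENCES Location (LocationId);
```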
None of that is necessary, especially not the doubling up of the tables. That is pure insanity.
Since the Standard for Modelling Relational Databases (IDEF1X) has been in common use for over 25 years (at least in the high quality, high performance end of the market), I use that terminology. Date & Darwen, consistent with the great work they have done to suppress the Relational Model, were unaware of IDEF1X until I brought it to their attention in 2009, and thus have new terminology for the Standard terminology that we have been using for decades. Further, the new terminology does not deal with all the cases, as IDEF1X does. Therefore I use the established Standard terminology, and avoid new terminology.
Even the concept of a "distributed key" fails to recognise the underlying ordinary PK::FK Relations, their implementation in SQL, and their power.
The Relational, and therefore IDEF1X, concept is Identifiers and Migration thereof.
Sure, the vendors are not exactly on the ball, and they have weird things such as "partial Indices", etc, which are completely unnecessary when the basics are understood. But famous "academics" and "theoreticians" coming up with incomplete new concepts when the concept was standardised and given full treatment 25 years ago ... that is unexpected and unacceptable.
IEC/ISO/ANSI SQL barely handles Codd's 3NF (Date & Darwen's "5NF") adequately, and it does not support Basetype-Subtype structures at all; there are no Declarative Constraints for this (and there should be). So we implement the structures ourselves, using ordinary CHECK CONSTRAINTs, etc (I avoid using Triggers for a number of reasons). However, I take all that into account. In order for me to effectively provide a Data Modelling service on StackOverflow, without having to preface that with a full discourse, I purposely provide models that can be implemented by capable people, using existing SQL and existing Constraints, to whatever extent they require. It is already simplified, and contains the common level of enforcement. If there is any question, just ask, and you shall receive.
We can use both the example graphic in the linked document and your fully IDEF1X-compliant Sensor Data Model.
Readers who are not familiar with the Relational Modelling Standard may find IDEF1X Notation useful. Readers who think a database can be mapped to objects, classes, and subclasses are advised that reading further may cause injury. This is further than Fowler and Ambler have read.
There are two types of Basetype-Subtype structures: Exclusive and Non-exclusive.
Exclusive means there must be one and only one Subtype row for each Basetype row. In IDEF1X terms, there should be a Discriminator column in the Basetype, which identifies the Subtype row that exists for it.
For more than two Subtypes, this is demanded, and I implement a Discriminator column.
For two Subtypes, since this is easily derived from existing data (eg. Sensor.IsSwitch is the Discriminator for Reading), I do not model an additional explicit Discriminator column for Reading. However, you are free to follow the Standard to the letter and implement a Discriminator.
I will take each aspect in detail.
The Discriminator column needs a CHECK CONSTRAINT to ensure it is within the range of values, eg: IN ("B", "C", "D"). IsSwitch is a BIT, which is 0 or 1, so that is already constrained.
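For instance, a minimal sketch of such a range CHECK on a character Discriminator (the table and column names here are hypothetical):

```sql
-- Hypothetical Basetype "Product" with a one-character Discriminator column;
-- the CHECK limits it to the valid Subtype codes.
ALTER TABLE Product
    ADD CONSTRAINT Product_Discriminator_ck
    CHECK (Discriminator IN ('B', 'C', 'D'));
```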
Since the PK of the Basetype defines its uniqueness, only one Basetype row will be allowed; no second Basetype row (and thus no second Subtype row) can be inserted.
Therefore it is overkill, completely redundant, an additional unnecessary Index, to implement an Index such as (PK, Discriminator) in the Basetype, as your link advises. The uniqueness is in the PK, and therefore the PK plus anything will be unique.
IDEF1X does not require the Discriminator in the Subtype tables. In the Subtype, which is again constrained by the uniqueness of its PK, as per the model, if the Discriminator were implemented as a column in that table, every row in it would have the same value for the Discriminator (every Book would be "B"; every ReadingSwitch would have IsSwitch = 1). Therefore it is absurd to implement the Discriminator as a column in the Subtype. And again, it is completely redundant, an additional unnecessary Index, to implement an Index such as (PK, Discriminator) in the Subtype: the uniqueness is in the PK, and therefore the PK plus anything will be unique.
The method identified in the link is a ham-fisted and bloated (massive data duplication for no purpose) way of implementing Referential Integrity. There is probably a good reason the author has not seen that construct anywhere else. It is a basic failure to understand SQL and to use it effectively. These "solutions" are typical of people who follow a dogma of "SQL can't do ..." and thus are blind to what SQL can do. The horrors that result from Fowler and Ambler's blind "methods" are even worse.
The Subtype PK is also the FK to the Basetype; that is all that is required to ensure that the Subtype does not exist without a parent Basetype.
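A minimal sketch of that, using the AnalogSensor Subtype from the question (datatypes are illustrative assumptions): the Subtype carries the same PK columns as the Basetype, and those same columns are its FK.

```sql
CREATE TABLE AnalogSensor (
    LocationId int            NOT NULL,
    SensorNo   int            NOT NULL,
    UpperLimit decimal(12, 4) NOT NULL,
    LowerLimit decimal(12, 4) NOT NULL,
    -- The Subtype PK is the Basetype PK ...
    CONSTRAINT AnalogSensor_pk
        PRIMARY KEY (LocationId, SensorNo),
    -- ... and the same columns are the FK to the Basetype.
    CONSTRAINT AnalogSensor_Sensor_fk
        FOREIGN KEY (LocationId, SensorNo)
        REFERENCES Sensor (LocationId, SensorNo)
);
```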
The SQL CHECK CONSTRAINT is limited to checking the inserted row. We need to check the inserted row against other rows, either in the same table, or in another table. Therefore a 'User Defined' Function is required.
Write a simple UDF that will check for existence of the PK and the Discriminator in the Basetype, and return 1 if EXISTS or 0 if NOT EXISTS. You will need one UDF per Basetype (not per Subtype).
In the Subtype, implement a CHECK CONSTRAINT that calls the UDF, using the PK (which is both the Basetype PK and the Subtype PK) and the Discriminator value.
I have implemented this in scores of large, real world databases, on different SQL platforms. Here is the 'User Defined' Function Code, and the DDL Code for the objects it is based on.
This particular syntax and code is tested on Sybase ASE 15.0.2 (they are very conservative about SQL Standards compliance).
I am aware that the limitations on 'User Defined' Functions are different for every SQL platform. However, this is the simplest of the simple, and AFAIK every platform allows this construct. (No idea what the Non-SQLs do.)
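As a minimal sketch only, not the linked code (SQL-Server-flavoured syntax; the IsSwitch Discriminator on Sensor is as discussed above, and datatypes are assumptions), the UDF and the Subtype CHECK CONSTRAINTs could look like this:

```sql
-- One UDF per Basetype: returns 1 if the Basetype row exists with the given
-- Discriminator value, else 0.
CREATE FUNCTION dbo.ValidateExclusive_Sensor (
    @LocationId int,
    @SensorNo   int,
    @IsSwitch   bit
    )
RETURNS int
AS
BEGIN
    IF EXISTS (
        SELECT 1
            FROM Sensor
            WHERE LocationId = @LocationId
              AND SensorNo   = @SensorNo
              AND IsSwitch   = @IsSwitch
        )
        RETURN 1;
    RETURN 0;
END;
GO

-- Each Subtype carries a CHECK CONSTRAINT that calls the UDF with its own PK
-- and the Discriminator value that identifies that Subtype.
ALTER TABLE SwitchSensor
    ADD CONSTRAINT SwitchSensor_Exclusive_ck
    CHECK (dbo.ValidateExclusive_Sensor(LocationId, SensorNo, 1) = 1);

ALTER TABLE AnalogSensor
    ADD CONSTRAINT AnalogSensor_Exclusive_ck
    CHECK (dbo.ValidateExclusive_Sensor(LocationId, SensorNo, 0) = 1);
```

Any attempt to insert a SwitchSensor row for a Sensor whose IsSwitch is 0 then fails the CHECK, which is exactly the Exclusivity rule.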
Yes, of course this clever little technique can be used to implement any non-trivial data rule that you can draw in a Data Model. In particular, to overcome the limitations of SQL. Note my caution to avoid two-way Constraints (circular references).
Therefore the CHECK CONSTRAINT in the Subtype ensures that the PK plus the correct Discriminator exists in the Basetype. Which means that only that Subtype exists for the Basetype (the PK).
Any subsequent attempt to insert another Subtype (ie. break the Exclusive Rule) will fail because the PK+Discriminator does not exist in the Basetype.
Any subsequent attempt to insert another row of the same Subtype is prevented by the uniqueness of its PK Constraint.
The only bit that is missing (not mentioned in the link) is that the Rule "every Basetype must have at least one Subtype" is not enforced. This is easily covered in Transactional code (I do not advise Constraints going in two directions, or Triggers); use the right tool for the job.
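A sketch of what such Transactional code might look like (the procedure name, datatypes and the 0 = Analog convention are assumptions; error handling is omitted for brevity):

```sql
-- Basetype and Subtype are inserted as one Logical Unit of Work, so a Sensor
-- row never persists without its Subtype row.
CREATE PROCEDURE Sensor_Analog_Add
    @LocationId int,
    @SensorNo   int,
    @UpperLimit decimal(12, 4),
    @LowerLimit decimal(12, 4)
AS
BEGIN
    BEGIN TRANSACTION;
        INSERT INTO Sensor (LocationId, SensorNo, IsSwitch)
            VALUES (@LocationId, @SensorNo, 0);   -- 0 = Analog
        INSERT INTO AnalogSensor (LocationId, SensorNo, UpperLimit, LowerLimit)
            VALUES (@LocationId, @SensorNo, @UpperLimit, @LowerLimit);
    COMMIT TRANSACTION;
END;
```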
For Non-exclusive Subtypes, the Basetype (parent) can host more than one Subtype (child).
There is no single Subtype to be identified.
The Discriminator does not apply to Non-exclusive Subtypes.
The existence of a Subtype is identified by performing an existence check on the Subtype table, using the Basetype PK.
Simply exclude the CHECK CONSTRAINT that calls the UDF above.
PRIMARY KEY, FOREIGN KEY, and the usual Range CHECK CONSTRAINTs adequately support all requirements for Non-exclusive Subtypes. For further detail, a diagrammatic overview including details, and the distinction between Subtypes and Optional Column tables, refer to this Subtype document.
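As a brief, hypothetical illustration (the Sensor model above uses Exclusive Subtypes, so generic table names are used here), the existence check is an ordinary correlated EXISTS against the Subtype table:

```sql
-- Which Subtype rows exist for a Non-exclusive Basetype row is determined by
-- checking the Subtype tables directly, using the Basetype PK.
SELECT  b.BasetypeId,
        CASE WHEN EXISTS (
                SELECT 1 FROM SubtypeA sa WHERE sa.BasetypeId = b.BasetypeId
                ) THEN 1 ELSE 0
        END AS HasSubtypeA
    FROM Basetype b;
```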
I, too, was taken in by C J Date's and Hugh Darwen's constant references to "furthering" the Relational Model. After many years of interaction, based on the mountain of consistent evidence, I have concluded that their work is, in fact, a debasement of it. They have done nothing to further Dr E F Codd's seminal work, the Relational Model, and everything to damage and suppress it.
They have private definitions for Relational terms, which of course severely hinders any communication. They have new terminology for terms we have had since 1970, in order to make it appear that they have "invented" it. Typical of frauds and thieves.
This section can be skipped by all readers who did not comment.
Unfortunately, some people are so schooled in doing things the wrong way (as advised by the freaks who pass for "theoreticians" in this space), at massive additional cost, that even when directed clearly in the right way, they cannot understand it. Perhaps that is why proper education cannot be substituted with a Question-and-Answer format.
Sam: I've noticed that this approach doesn't prevent someone from using UPDATE to change a Basetype's Discriminator value. How could that be prevented? The FOREIGN KEY + duplicate Discriminator column in subtypes approach seems to overcome this.
Yes. This Method doesn't prevent someone using UPDATE to change a Key, or a column in some unrelated table, or headaches, either. It answers a specific question, and nothing else. If you wish to prevent certain DML commands or whatever, use the SQL facility that is designed for that purpose. All that is way beyond the scope of this question. Otherwise every answer has to address every unrelated issue.
Answer. Since we should be using Open Architecture Standards, available since 1993, all changes to the db are via ACID Transactions, only. That means direct INSERT/UPDATE/DELETE to any table is prohibited; the data retains Integrity and Consistency (ACID terminology). Otherwise, sure, you have a bleeding mess, such as your eg. and the consequences. Those freaks do not understand Transactions; they understand only single-file INSERT/UPDATE/DELETE. Again, out-of-scope. If you need more details, please open a new question, and I will answer it in detail.
Further, the FK + duplicate Discriminator + duplicate Index (and the massive cost therein!) does nothing of the sort; I don't know where you got "seems" from.
dtheodor: This question is about referential integrity. Referential integrity doesn't mean "check that the reference is valid on insert and then forget about it". It means "maintain the validity of the reference forever". The duplicate discriminator + FK method guarantees this integrity; your UDF approach does not. It's without question that UPDATEs should not break the reference.
The problem here is two-fold. First, you need basic education in other areas re Relational Databases and Open Architecture Standards. Second, you need de-programming, because even though I have given you the answer, you do not understand it; you are slavishly repeating the cult mantra that this particular Method does not do what the massively inefficient cult method does. Short of repeating my request to open a new question, and thus providing a complete answer to that other area of Relational Databases that you evidently do not understand (no one in the Date & Darwen cult understands the basics of Relational Databases), I really do not know what to do.
Ok, a short answer, which really belongs in another question: How is the Discriminator in Exclusive Subtypes Protected from an Invalid UPDATE?
Clarity. Yes, Referential Integrity doesn't mean "check that the reference is valid on insert and then forget about it". I didn't say that it meant that, either.
Referential Integrity means each Reference in the database (FOREIGN KEY) has Integrity with the PRIMARY KEY that it references.
Declarative Referential Integrity means the declared References in the database:
CONSTRAINT ... FOREIGN KEY ... REFERENCES ...
CONSTRAINT ... CHECK ...
are maintained by the RDBMS platform and not by the application code.
It does not mean "maintain the validity of the reference forever” either.
The original Question regards RI for Subtypes, and I have Answered it, providing DRI.
Your question does not regard RI or DRI.
Your Question, although asked incorrectly (because you are expecting the Method to provide what the Method does not provide, and you do not understand that your requirement is fulfilled by other means), is: How is the Discriminator in Exclusive Subtypes Protected from an Invalid UPDATE?
The Answer is: use the Open Architecture Standards that we should have been using since 1993. That prevents all invalid UPDATEs. If you bothered to read the linked documents, and understand them, your concern is a non-issue; it does not exist. That is the short answer.
But you did not understand the short answer the first or second time, so in order to avoid having you repeat the mantra a third time, I will have to explain it. Here. In the wrong place.
No one is allowed to walk up to the database and change a column here or a value there. Using either SQL directly or an app that uses SQL directly. If that were allowed, you will not have a secured database, you will have a prostitute in a cheap brothel.
All updates (lower case) to the database (including multi-row INSERT/UPDATE/DELETE) are implemented as ACID SQL Transactions. And nothing but Transactions. The set of Transactions constitutes the Database API that is exposed to any application that uses the database.
SQL has ACID Transactions. Non-SQLs do not have Transactions. Your cult loves Non-SQLs. They know absolutely nothing about Transactions, let alone Open Architecture. Their Non-architecture is a monolithic stack. And a "database" that gets refactored every month.
Since the only Transactions that you write will insert the basetype+subtype in a single Transaction, as a single Logical Unit of Work, the Integrity (data Integrity, not Referential Integrity) of the basetype::subtype relation is maintained, and maintained within the database. Therefore all updates to the database will be Valid, there will not be any Invalid updates.
Since you are not so stupid as to write code that UPDATEs the Discriminator column in a single row without the attendant DELETE Previous_Subtype, and since you will place that code in a Transaction and GRANT EXEC permission for it to user ROLES, there will not be an Invalid Discriminator anywhere in the database.
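A sketch of the kind of Transaction this describes (the procedure name, role name and datatypes are assumptions; error handling is omitted): changing a Sensor from Analog to Switch deletes the previous Subtype row, updates the Discriminator, and inserts the new Subtype row, as one Logical Unit of Work; only EXEC on the procedure is granted, never direct table access.

```sql
CREATE PROCEDURE Sensor_AnalogToSwitch_Mod
    @LocationId  int,
    @SensorNo    int,
    @OnTimeLimit int
AS
BEGIN
    BEGIN TRANSACTION;
        -- Remove the previous Subtype row ...
        DELETE FROM AnalogSensor
            WHERE LocationId = @LocationId AND SensorNo = @SensorNo;
        -- ... change the Discriminator ...
        UPDATE Sensor
            SET IsSwitch = 1
            WHERE LocationId = @LocationId AND SensorNo = @SensorNo;
        -- ... and add the new Subtype row, all in one Logical Unit of Work.
        INSERT INTO SwitchSensor (LocationId, SensorNo, OnTimeLimit)
            VALUES (@LocationId, @SensorNo, @OnTimeLimit);
    COMMIT TRANSACTION;
END;
GO

-- Users update the database only via the Transaction, never the tables.
GRANT EXECUTE ON Sensor_AnalogToSwitch_Mod TO sensor_role;
```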