Should a normalized table have fewer columns and rely on reference fields (foreign keys) as much as possible? Is that the right approach? Is there any relationship between the number of columns and a good normalization process?
There is no number of columns that is inherently too many. That said, based on your description, it sounds like you're not dealing with a properly structured table. 180 columns to define a user and 280 columns to define a thing... that can't possibly be a normalized table or a fact table.
Having too many columns results in a lot of nulls (evil) and an unwieldy object that the table is mapped to. This hurts readability in the IDE and hinders maintenance, increasing development costs.
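To illustrate (the table and column names here are made up), one common way to tame a null-heavy wide table is to move a rarely populated group of attributes into its own optional 1:1 table:

-- Hypothetical sketch: keep the core user attributes in one table...
CREATE TABLE dbo.Users (
    UserId   int IDENTITY PRIMARY KEY,
    UserName nvarchar(100) NOT NULL,
    Email    nvarchar(256) NOT NULL
);

-- ...and push a rarely populated attribute group into an optional
-- 1:1 extension table, so the main table stops collecting NULLs.
CREATE TABLE dbo.UserBillingProfiles (
    UserId      int PRIMARY KEY REFERENCES dbo.Users (UserId),
    TaxNumber   nvarchar(50)  NOT NULL,
    BillingNote nvarchar(400) NULL
);

The split also keeps the mapped object small: code that never touches billing never has to load those columns.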
There is no precise guidance. A table could have as few as one column or as many as the maximum, 1,024. However, in general, you'll probably see no more than 10-15 columns per table in a well-normalized database.
SQL Server limits a standard table to a maximum of 1,024 columns. It does have a wide-table feature that allows a table to have up to 30,000 columns instead of 1,024.
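For reference, the wide-table feature relies on SPARSE columns plus a column set; a minimal sketch (table and column names are hypothetical):

-- Minimal wide-table sketch: SPARSE columns plus an XML column set
-- lift the 1,024-column limit to 30,000 in SQL Server.
CREATE TABLE dbo.DeviceReadings (
    ReadingId   int IDENTITY PRIMARY KEY,
    Temperature decimal(9,2) SPARSE NULL,
    Humidity    decimal(9,2) SPARSE NULL,
    AllReadings xml COLUMN_SET FOR ALL_SPARSE_COLUMNS
);

Wide tables are meant for genuinely sparse data, not as a way around normalization.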
You should follow normalization principles rather than be concerned with the sheer number of columns in a table. The business requirements will drive the entities, their attributes, and their relationships, and no absolute number is the "correct" one.
Is there any relationship between number of columns and a good normalization process?
In short, no. A 3NF normalized table will have as many columns as it needs, provided that
data within the table is dependent on the key, the whole key, and nothing but the key (so help me Codd).
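A quick hypothetical sketch of that rule: in a single Orders table that also stores CustomerCity, the city depends on CustomerId rather than on the order key (a transitive dependency), so 3NF moves it out:

-- Not 3NF: CustomerCity depends on CustomerId, not on OrderId.
-- CREATE TABLE dbo.Orders (OrderId, CustomerId, CustomerCity, OrderDate);

-- 3NF: every non-key column depends on the key, the whole key,
-- and nothing but the key.
CREATE TABLE dbo.Customers (
    CustomerId   int PRIMARY KEY,
    CustomerCity nvarchar(100) NOT NULL
);

CREATE TABLE dbo.Orders (
    OrderId    int PRIMARY KEY,
    CustomerId int NOT NULL REFERENCES dbo.Customers (CustomerId),
    OrderDate  date NOT NULL
);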
There are situations where (some) denormalization may actually improve performance, and the only real way to decide whether it is warranted is to test it.
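As a hypothetical example of that trade-off (OrderHeaders and OrderLines are made-up names): caching an aggregate on the parent row avoids a join on every read, but it duplicates derivable data and has to be kept in sync, so measure both versions before keeping it.

-- Denormalization sketch: a redundant OrderTotal lets reads skip the
-- aggregate over order lines; it must be refreshed whenever lines change.
CREATE TABLE dbo.OrderLines (
    OrderId   int           NOT NULL,
    LineNo    int           NOT NULL,
    Quantity  int           NOT NULL,
    UnitPrice decimal(18,2) NOT NULL,
    PRIMARY KEY (OrderId, LineNo)
);

CREATE TABLE dbo.OrderHeaders (
    OrderId    int           PRIMARY KEY,
    OrderTotal decimal(18,2) NULL  -- derivable from OrderLines; stored for speed
);

UPDATE h
SET    h.OrderTotal = l.Total
FROM   dbo.OrderHeaders AS h
JOIN  (SELECT OrderId, SUM(Quantity * UnitPrice) AS Total
       FROM   dbo.OrderLines
       GROUP  BY OrderId) AS l
  ON   l.OrderId = h.OrderId;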