I am designing a User table in my database. I have about 30 options for each user, each of which can be either "allow" or "disallow". Should I store these as 30 bit columns, or should I use a single int column and parse out each bit in my application?
Also, we run SQL Server 2008 and 2005 (depending on the environment).
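For concreteness, this is roughly what reading and writing a packed flag looks like with the single-int approach; the same bitwise logic applies whether it lives in T-SQL or in application code. This is only a sketch: the Users table, the OptionFlags column, and the bit value 8 are hypothetical names, not part of my actual schema.

-- Check whether the option mapped to bit value 8 is enabled
SELECT UserId
FROM   Users
WHERE  OptionFlags & 8 = 8;

-- Turn that option on for one user
UPDATE Users
SET    OptionFlags = OptionFlags | 8
WHERE  UserId = 42;

-- Turn it off again
UPDATE Users
SET    OptionFlags = OptionFlags & ~8
WHERE  UserId = 42;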
I just tried creating two tables, one with a single int column and one with 30 bit columns, then added a row to each and looked at the rows with SQL Server Internals Viewer:
CREATE TABLE T_INT(X INT DEFAULT 1073741823);

CREATE TABLE T_BIT(
    X1 BIT DEFAULT 1,
    /*Other columns omitted for brevity*/
    X30 BIT DEFAULT 1
);

INSERT INTO T_INT DEFAULT VALUES;
INSERT INTO T_BIT DEFAULT VALUES;
[Screenshot: single row for the table with 30 bit columns]
[Screenshot: single row for the table with one int column]
From a storage point of view, SQL Server combines the bit columns, so the data itself occupies exactly the same amount of space in both cases (yellow). You do, however, lose 3 bytes per row to the NULL bitmap (purple), whose length is directly proportional to the number of columns, irrespective of whether they allow NULLs.
[Key for the fields in the int version; the colour coding is the same for the bit version]
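If you want to confirm the per-row sizes without a hex viewer, one option is to query sys.dm_db_index_physical_stats in DETAILED mode, which reports an avg_record_size_in_bytes per table. A sketch, run in the database that holds the two test tables:

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.avg_record_size_in_bytes,
       ips.record_count
FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ips
WHERE  ips.object_id IN (OBJECT_ID('T_INT'), OBJECT_ID('T_BIT'));

The difference between the two averages should be the 3 bytes of extra NULL bitmap described above.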
Neither, unless you have a major space issue or a compatibility requirement with some other system. Think about how packing the flags will prevent you from optimizing your queries and from clearly understanding what each bit represents.
You can have more than a thousand columns in a table, or you can use a child table for user settings (see the sketch below). Why limit yourself to 30 bits that you have to parse in your app? Imagine the changes you would need to make to the application if several of these settings were deprecated or a couple of new ones were introduced.
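A minimal sketch of the child-table approach; the table and column names are illustrative, not prescribed:

CREATE TABLE UserSetting (
    UserId      INT          NOT NULL,  -- FK to the User table
    SettingName VARCHAR(50)  NOT NULL,  -- e.g. 'CanExport'
    IsAllowed   BIT          NOT NULL,
    CONSTRAINT PK_UserSetting PRIMARY KEY (UserId, SettingName)
);

Adding or deprecating a setting then becomes a data change (inserting or deleting rows) rather than a schema change or a reshuffling of bit positions.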