I'm refactoring a database schema (SQL Server 2008) and gathering arguments to change NCHAR(1) columns (which hold Y|N values) to BIT. Everybody understands the change makes sense, and nobody knows why NCHAR(1) was used in the first place, but the change affects a production database, so weighty arguments are required. The table holds an address catalog (up to 1 million records).
The first argument I found: each NCHAR(1) field takes 2 bytes, while up to 8 BIT fields share a single byte (the next 8 take one additional byte).
What else is there? Perhaps some index performance implications?
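The storage argument above can be sketched numerically. This is a rough back-of-envelope calculation, not a measurement; the row count comes from the question, the number of flag columns is a hypothetical assumption, and row/page overhead is ignored:

```python
import math

# Assumptions: 1 million rows (from the question), 8 flag columns
# (hypothetical), row and page overhead ignored.
ROWS = 1_000_000
flag_columns = 8

# NCHAR(1) stores one UTF-16 code unit: 2 bytes per column per row.
nchar_bytes = ROWS * flag_columns * 2

# SQL Server packs up to 8 BIT columns into a single byte per row.
bit_bytes = ROWS * math.ceil(flag_columns / 8)

print(f"NCHAR(1): {nchar_bytes / 1_000_000:.1f} MB")  # 16.0 MB
print(f"BIT:      {bit_bytes / 1_000_000:.1f} MB")    # 1.0 MB
```

So even in the best case for BIT, the saving is on the order of a couple of bytes per flag per row, which is small relative to typical row sizes in an address catalog.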
I would hesitate to provide any arguments for such a change unless you have a good reason to make it; that is, you have to balance what you would personally have done, or would prefer, against the cost of actually implementing the change and the benefits it brings.
Have you checked whether the use of nchar(1) is actually hurting performance, or are you falling into the trap of premature optimization? You are only talking about 1 million records here.
For the minor storage/IO cost you think you are incurring, weigh the total man-hours to change, retest, and upgrade the system (times the hourly rate) against the cost of simply buying a faster disk. I suspect the disk will be far cheaper, and it will benefit every aspect of the system.