A lot of database schemas seem to follow this standard:
(2^n)-1 for large fields:
varchar(511)
varchar(255)
varchar(127)
...then (2^n) for smaller ones:
varchar(64)
varchar(32)
varchar(16)
varchar(8)
I understand why (2^n)-1 values are used; what I don't understand is why it isn't necessary to continue the trend down to the small fields, e.g.:
varchar(63)
varchar(31)
varchar(15)
varchar(7)
Is there a reason for this, or is it just that the returns have diminished too far?
I remember the old days, when using 2^n lengths was better for the alignment of blocks on disk or in memory. Aligned blocks were faster. Today, block sizes are bigger, and memory and disks are fast enough that alignment can be ignored, except for very large blocks, that is. (Whatever "very large" means today...)
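A concrete version of that alignment argument, assuming a storage engine that keeps a one-byte length prefix for short VARCHAR columns (MySQL's InnoDB does this for columns of up to 255 bytes): declaring (2^n)-1 makes the prefix plus the data add up to exactly 2^n bytes. A minimal sketch, with a made-up table name:

CREATE TABLE alignment_demo (
    title VARCHAR(255), -- 255 data bytes + 1 length byte = 256 = 2^8
    slug  VARCHAR(127), -- 127 + 1 = 128 = 2^7
    code  VARCHAR(32)   -- 32 + 1 = 33: for small fields the saving is one byte, so the -1 is dropped
);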
Nowadays it is just traditional to do so.
And another reason may be the famous saying: there are only 10 types of people, those who understand binary and those who don't.
And 2^n - 1 numbers are candidates for Mersenne primes, so it's geeky, too...