Referring to the Postgres Documentation on Character Types, I am unclear on the point of specifying a length for character varying (varchar) types.
My assumption was that the declared length affects storage or performance. The documentation does mention:
The storage requirement for a short string (up to 126 bytes) is 1 byte plus the actual string, which includes the space padding in the case of character. Longer strings have 4 bytes of overhead instead of 1. Long strings are compressed by the system automatically, so the physical requirement on disk might be less. Very long values are also stored in background tables so that they do not interfere with rapid access to shorter column values. In any case, the longest possible character string that can be stored is about 1 GB. (The maximum value that will be allowed for n in the data type declaration is less than that. It wouldn't be useful to change this because with multibyte character encodings the number of characters and bytes can be quite different.)
This talks about the size of the string, not the size of the field (i.e. it sounds like Postgres will always compress a large string in a large varchar field, but not a small string in a large varchar field?).
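One way to check this, if it helps, is with pg_column_size(), which reports the number of bytes a stored value occupies. The storage_demo table below is just a hypothetical sketch illustrating that the declared maximum does not change the on-disk size; only the actual contents do:

```sql
-- Hypothetical demo table: same value stored under two different declared lengths.
CREATE TABLE storage_demo (
    short_col varchar(50),
    huge_col  varchar(5000000)
);

INSERT INTO storage_demo VALUES ('Cardiff', 'Cardiff');

-- pg_column_size() reports the bytes used to store each value;
-- both columns should report the same size for identical contents.
SELECT pg_column_size(short_col) AS short_bytes,
       pg_column_size(huge_col)  AS huge_bytes
FROM storage_demo;
```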
I ask this question because it would be much easier (and lazier) to specify a much larger size so you never have to worry about a string being too large. For example, if I specify varchar(50) for a place name, I will run into locations with more characters (e.g. Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch), but if I specify varchar(100) or varchar(500), I'm less likely to hit that problem.
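As a hypothetical illustration, the declared length acts purely as a length check at insert/update time:

```sql
-- Hypothetical example: n in varchar(n) is enforced only as a constraint on input.
CREATE TABLE places (name varchar(50));

-- Fails with: ERROR:  value too long for type character varying(50)
INSERT INTO places (name)
VALUES ('Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch');
```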
So would you get a performance hit between varchar(500) and (arbitrarily) varchar(5000000) or text if your longest string were, say, 400 characters long?
Also, out of interest, if anyone knows the answer to this for other databases as well, please add that too.
I have googled, but not found a sufficiently technical explanation.
Variable-length character fields have a declared maximum length and a current length that can vary while a program is running.
CHAR is conceptually a fixed-length, blank-padded string. Trailing blanks (spaces) are removed on input, and are restored on output. The default length is 1, and the maximum length is 65000 octets (bytes).
A fixed-length column requires the defined number of bytes regardless of the actual size of the data. The CHAR data type is fixed-length. For example, a CHAR(25) column requires 25 bytes of storage for all values, so the string “This is a text string” uses 25 bytes of storage.
The CHARACTER VARYING(m, r) data type stores a string of letters, digits, and symbols of varying length, where m is the maximum size of the column (in bytes) and r is the minimum number of bytes reserved for that column.
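As a rough sketch in Postgres syntax (the storage details quoted above are vendor-specific, so treat this only as an illustration of the semantics): char(n) blank-pads its values and ignores trailing blanks in comparisons, while varchar keeps exactly what was entered.

```sql
-- char(n) comparisons ignore the blank padding; varchar comparisons do not.
SELECT 'abc'::char(10)    = 'abc' AS char_padding_ignored,     -- true
       'abc '::varchar(10) = 'abc' AS varchar_trailing_equal;  -- false
```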
My understanding is that having constraints is useful for data integrity; therefore, I use column sizes both to validate data items at the lower layer and to better describe the data model.
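For instance, here is a sketch of two ways to express such a constraint (the customers table is hypothetical): the declared length on a varchar column, or an explicit CHECK constraint on a text column, which states the same rule in the data model.

```sql
CREATE TABLE customers (
    -- declared length documents and enforces the expected size
    country_code varchar(2) NOT NULL,
    -- equivalent rule spelled out explicitly on a text column
    city         text       NOT NULL CHECK (char_length(city) <= 100)
);
```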
My understanding is that this is a legacy of older databases with storage that wasn't as flexible as that of Postgres. Some would use fixed-length structures to make it easy to find particular records and, since SQL is a somewhat standardized language, that legacy is still seen even when it doesn't provide any practical benefit.
Thus, your "make it big" approach should be an entirely reasonable one with Postgres, but it may not transfer well to other, less flexible RDBMSs.