I tried importing a database dump from a SQL file, and the insert failed when inserting the string Mér into a field defined as character varying(3). I didn't capture the exact error, but it pointed to that specific value and the varying(3) constraint.
Since this was unimportant to what I was doing at the time, I just changed the value to Mer, it worked, and I moved on.
Does a varying field's limit take into account the length of the byte string? What really boggles my mind is that this was dumped from another PostgreSQL database, so it doesn't make sense that the constraint allowed the value to be written in the first place.
One of the interesting features of PostgreSQL is its handling of Unicode characters. In SQL Server, storing non-English characters requires the NVARCHAR or NCHAR data types; in PostgreSQL, the varchar data type itself stores both English and non-English characters.
What is the PostgreSQL varchar data type? In PostgreSQL, varchar without a length specifier holds strings of effectively unlimited length (up to roughly 1 GB, the general limit for a single value), while varchar(n) limits the value to n characters.
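For instance, a minimal sketch in a UTF-8 encoded database (the table and column names are illustrative):
create table city (name varchar(10));
insert into city values ('Mérignac');  -- accented characters go into plain varchar
select name, length(name) from city;   -- length is counted in characters: 8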
As for varchar versus character varying: they are identical. Tip from the documentation: there are no performance differences among the three character types (text, character varying, character), apart from increased storage size when using the blank-padded type, and a few extra cycles to check the length when storing into a length-constrained column.
VARCHAR is an alias for CHARACTER VARYING, so there is no difference; see the documentation. The notations varchar(n) and char(n) are aliases for character varying(n) and character(n), respectively. character without a length specifier is equivalent to character(1).
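A quick way to see the aliasing, a sketch (the table name is illustrative):
create table alias_demo (a varchar(3), b character varying(3), c char(3));
-- \d alias_demo in psql shows column a as "character varying(3)",
-- identical to column b; column c is shown as "character(3)"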
The length limit imposed by varchar(N) types and calculated by the length function is in characters, not bytes. So 'abcdef'::char(3) is truncated to 'abc', but 'a€cdef'::char(3) is truncated to 'a€c', even in the context of a database encoded as UTF-8, where 'a€c' is encoded using 5 bytes.
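You can see the character/byte distinction directly with length versus octet_length, a sketch run in a UTF-8 database:
select 'a€cdef'::char(3);                   -- 'a€c': truncation counts characters
select length('a€c'), octet_length('a€c');  -- 3 characters, 5 bytes (€ is 3 bytes in UTF-8)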
If restoring a dump file complained that 'Mér' would not go into a varchar(3) column, that suggests you were restoring a UTF-8 encoded dump file into a SQL_ASCII database.
For example, I did this in a UTF-8 database:
create schema so4249745;
create table so4249745.t(key varchar(3) primary key);
insert into so4249745.t values('Mér');
And then dumped this and tried to load it into a SQL_ASCII database:
pg_dump -f dump.sql --schema=so4249745 --table=t
createdb -E SQL_ASCII -T template0 enctest
psql -f dump.sql enctest
And sure enough:
psql:dump.sql:34: ERROR: value too long for type character varying(3)
CONTEXT: COPY t, line 1, column key: "Mér"
By contrast, if I create the database enctest as encoding LATIN1 or UTF8, it loads fine.
This problem comes about from the combination of dumping a database with a multi-byte character encoding and trying to restore it into a SQL_ASCII database. Using SQL_ASCII basically disables the transcoding of client data to server data and assumes one byte per character, leaving it to the clients to take responsibility for using the right character encoding. Since the dump file contains the stored string as UTF-8, it is four bytes (é takes two), so a SQL_ASCII database sees it as four characters and therefore regards it as violating the varchar(3) constraint. When it prints the value in the error message, my terminal reassembles those bytes back into three characters.
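One way to see this behavior directly, a sketch (assuming two databases like the ones created above):
-- in a UTF-8 database, length counts characters:
select length('Mér');        -- 3
select octet_length('Mér');  -- 4 (é is two bytes in UTF-8)
-- in a SQL_ASCII database, each byte is treated as one character,
-- so the same literal reports length 4 and fails a varchar(3) check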