
What's the SQL national character (NCHAR) datatype really for?

As well as CHAR (CHARACTER) and VARCHAR (CHARACTER VARYING), SQL offers an NCHAR (NATIONAL CHARACTER) and NVARCHAR (NATIONAL CHARACTER VARYING) type. In some databases, this is the better datatype to use for character (non-binary) strings:

  • In SQL Server, NCHAR is stored as UTF-16LE and is the only way to reliably store non-ASCII characters, CHAR being limited to a single-byte code page;

  • In Oracle, NVARCHAR may be stored as UTF-16 or UTF-8 rather than a single-byte collation;

  • But in MySQL, NVARCHAR is simply VARCHAR, so it makes no difference: either type can be stored with UTF-8 or any other collation (see the sketch after this list).
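
Here is that sketch: a minimal, hypothetical illustration of the declarations and the N'' literal syntax; none of the table or column names come from the question, and the behaviour noted in the comments follows the bullets above.

    -- SQL Server: CHAR/VARCHAR use the column's single-byte code page,
    -- while NCHAR/NVARCHAR are stored as UTF-16LE.
    CREATE TABLE customer_mssql (
        code      CHAR(8),         -- code-page data only
        full_name NVARCHAR(100)    -- Unicode (UTF-16LE)
    );

    -- MySQL: NVARCHAR is just VARCHAR with a predefined character set;
    -- a plain VARCHAR column can hold UTF-8 just as well.
    CREATE TABLE customer_mysql (
        code      VARCHAR(8),
        full_name VARCHAR(100) CHARACTER SET utf8mb4
    );

    -- The N'' prefix marks a national-character string literal.
    INSERT INTO customer_mssql (code, full_name)
    VALUES ('AB1234', N'Grüße, 世界');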

So, what does NATIONAL actually conceptually mean, if anything? The vendors' docs only tell you about what character sets their own DBMSs use, rather than the actual rationale. Meanwhile the SQL92 standard explains the feature even less helpfully, stating only that NATIONAL CHARACTER is stored in an implementation-defined character set. As opposed to a mere CHARACTER, which is stored in an implementation-defined character set. Which might be a different implementation-defined character set. Or not.

Thanks, ANSI. Thansi.

Should one use NVARCHAR for all character (non-binary) storage purposes? Are there currently-popular DBMSs in which it will do something undesirable, or which just don't recognise the keyword (or N'' literals)?

asked Oct 09 '10 by bobince



3 Answers

"NATIONAL" in this case means characters specific to different nationalities. Far east languages especially have so many characters that one byte is not enough space to distinguish them all. So if you have an english(ascii)-only app or an english-only field, you can get away using the older CHAR and VARCHAR types, which only allow one byte per character.

That said, most of the time you should use NCHAR/NVARCHAR. Even if you don't think you need to support (or might one day support) multiple languages in your data, English-only apps still need to handle security attacks that use foreign-language characters sensibly.

In my opinion, about the only place where the older CHAR/VARCHAR types are still preferred is for frequently-referenced, ASCII-only internal codes and data on platforms like SQL Server that support the distinction: data that would be the equivalent of an enum in a client language like C++ or C#.
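
A short SQL Server sketch of that trade-off (the table and values here are hypothetical, not from the answer): the enum-like internal code can stay VARCHAR, while user-visible text goes in NVARCHAR so non-Latin characters survive.

    -- status_code: enum-like internal code, ASCII only
    -- label: user-visible text that may be in any language
    CREATE TABLE order_status (
        status_code VARCHAR(16)  NOT NULL,
        label       NVARCHAR(50) NOT NULL
    );

    INSERT INTO order_status (status_code, label)
    VALUES ('SHIPPED', N'発送済み');

    -- Storing that same label in a VARCHAR column under a single-byte
    -- collation would silently turn the characters into '?'.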

answered Oct 18 '22 by Joel Coehoorn


Meanwhile the SQL92 standard explains the feature even less helpfully, stating only that NATIONAL CHARACTER is stored in an implementation-defined character set. As opposed to a mere CHARACTER, which is stored in an implementation-defined character set. Which might be a different implementation-defined character set. Or not.

Coincidentally, this is the same "distinction" the C++ standard makes between char and wchar_t: a relic of the Dark Ages of Character Encoding, when every language/OS combination had its own character set.

Should one use NVARCHAR for all character (non-binary) storage purposes?

It is not important whether the declared type of your column is VARCHAR or NVARCHAR. But it is important to use Unicode (whether UTF-8, UTF-16, or UTF-32) for all character storage purposes.

Are there currently-popular DBMSs in which it will do something undesirable

Yes: In MS SQL Server, using NCHAR makes your (English) data take up twice as much space. Unfortunately, UTF-8 isn't supported yet.

EDIT: SQL Server 2019 finally introduced UTF-8 support.
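
For example (a sketch, assuming SQL Server 2019 or later and a hypothetical table): a UTF-8 collation lets a plain VARCHAR column hold Unicode without doubling the storage for ASCII-heavy data.

    -- One of the UTF-8 collations introduced in SQL Server 2019
    CREATE TABLE article (
        title VARCHAR(200) COLLATE Latin1_General_100_CI_AS_SC_UTF8
    );

    -- ASCII characters cost one byte each; non-ASCII text is still stored losslessly.
    INSERT INTO article (title) VALUES (N'Naïve façade, ναι');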

answered Oct 18 '22 by dan04


In Oracle, the database character set can be a multi-byte character set, so you can store all manner of characters in there... but you need to understand and define the length of the columns appropriately (in either BYTES or CHARACTERS).

NVARCHAR gives you the option of keeping a single-byte database character set (which reduces the potential for confusion between BYTE- and CHARACTER-sized columns) and using NVARCHAR for the multi-byte data.

Since I predominantly work with English data, I'd go with a multi-byte character set (mostly UTF-8) as the database character set and ignore NVARCHAR. If I inherited an old database that was in a single-byte character set and was too big to convert, I might use NVARCHAR, but I'd prefer not to.
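
An Oracle sketch of the length-semantics point above (hypothetical table, assuming an AL32UTF8 database character set):

    CREATE TABLE person (
        surname_b VARCHAR2(10 BYTE),  -- room for 10 bytes: possibly only 2-3 multi-byte characters
        surname_c VARCHAR2(10 CHAR),  -- room for 10 characters, whatever their byte length
        surname_n NVARCHAR2(10)       -- always measured in characters, in the national character set
    );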

answered Oct 18 '22 by Gary Myers