
How does Java store UTF-16 characters in its 16-bit char type?

According to the Java SE 7 Specification, Java uses UTF-16, as defined by the Unicode standard, to represent characters. Imagining a String as a simple array of 16-bit variables, each containing one character, life is simple.

Unfortunately, there are code points for which 16 bits simply aren't enough (around 16/17ths of the Unicode code space). Within a String this poses no direct problem: to store one of these ~1,048,576 supplementary characters, an additional two bytes are used, i.e. two array positions in that String.

This works for Strings, because a String can always grow by another two bytes. But what about single variables, which, in contrast to the UTF-16 encoding, have a fixed length of 16 bits? How can these characters be stored there, and in particular, how does Java do it with its 2-byte char type?
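For illustration, here is exactly that two-positions behaviour (U+1D11E, the musical G clef, is just an example of such a character):

    public class TwoPositions {
        public static void main(String[] args) {
            // U+1D11E lies outside the BMP, so UTF-16 stores it as a
            // surrogate pair (U+D834 U+DD1E) -> two array positions.
            String clef = "\uD834\uDD1E";
            System.out.println(clef.length());                         // 2 code units
            System.out.println(clef.codePointCount(0, clef.length())); // 1 code point
        }
    }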

Asked by Kierrow on Oct 28 '12

People also ask

How are chars stored in Java?

A Java char is conventionally used to hold a Unicode code unit, i.e. a 16-bit unit that is part of a valid UTF-16 sequence.

Does Java use UTF-16?

The native character encoding of the Java programming language is UTF-16. A charset in the Java platform therefore defines a mapping between sequences of sixteen-bit UTF-16 code units (that is, sequences of chars) and sequences of bytes.
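A minimal round trip through such a mapping, for illustration (the charset choice here is arbitrary):

    import java.nio.charset.StandardCharsets;

    public class RoundTrip {
        public static void main(String[] args) {
            // Encode a sequence of UTF-16 code units (a String) into bytes...
            byte[] bytes = "é".getBytes(StandardCharsets.UTF_8);  // 0xC3, 0xA9
            // ...and decode those bytes back into UTF-16 code units.
            String back = new String(bytes, StandardCharsets.UTF_8);
            System.out.println(back.equals("é"));                 // true
        }
    }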

What data representation does Java use for storing characters?

Internally, Java uses the Unicode character set.

Why does Java use UTF-16?

Because it used to be UCS-2, which was a nice fixed-length 16-bit encoding. Of course, 16 bits turned out not to be enough, so UTF-16 was retrofitted on top. Here is a quote from the Unicode FAQ: Originally, Unicode was designed as a pure 16-bit encoding, aimed at representing all modern scripts.


1 Answer

The answer is in the javadoc of the Character class:

The char data type (and therefore the value that a Character object encapsulates) are based on the original Unicode specification, which defined characters as fixed-width 16-bit entities. The Unicode standard has since been changed to allow for characters whose representation requires more than 16 bits.

The range of legal code points is now U+0000 to U+10FFFF, known as Unicode scalar value. (Refer to the definition of the U+n notation in the Unicode standard.) The set of characters from U+0000 to U+FFFF is sometimes referred to as the Basic Multilingual Plane (BMP). Characters whose code points are greater than U+FFFF are called supplementary characters.

The Java 2 platform uses the UTF-16 representation in char arrays and in the String and StringBuffer classes. In this representation, supplementary characters are represented as a pair of char values, the first from the high-surrogates range, (\uD800-\uDBFF), the second from the low-surrogates range (\uDC00-\uDFFF).

A char value, therefore, represents Basic Multilingual Plane (BMP) code points, including the surrogate code points, or code units of the UTF-16 encoding. An int value represents all Unicode code points, including supplementary code points. The lower (least significant) 21 bits of int are used to represent Unicode code points and the upper (most significant) 11 bits must be zero.

Unless otherwise specified, the behavior with respect to supplementary characters and surrogate char values is as follows:

  • The methods that only accept a char value cannot support supplementary characters. They treat char values from the surrogate ranges as undefined characters. For example, Character.isLetter('\uD840') returns false, even though this specific value if followed by any low-surrogate value in a string would represent a letter.
  • The methods that accept an int value support all Unicode characters, including supplementary characters. For example, Character.isLetter(0x2F81A) returns true because the code point value represents a letter (a CJK ideograph).

In the Java SE API documentation, Unicode code point is used for character values in the range between U+0000 and U+10FFFF, and Unicode code unit is used for 16-bit char values that are code units of the UTF-16 encoding. For more information on Unicode terminology, refer to the Unicode Glossary.
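The quoted behaviour is easy to check; a small sketch (the code points are the javadoc's own examples):

    public class SurrogateDemo {
        public static void main(String[] args) {
            // Character.toChars splits a supplementary code point into
            // its UTF-16 surrogate pair.
            char[] pair = Character.toChars(0x2F81A);             // a CJK ideograph
            System.out.printf("%04X %04X%n",
                    (int) pair[0], (int) pair[1]);                // D87E DC1A

            // The char overload sees only one code unit, so a lone
            // high surrogate is not a letter...
            System.out.println(Character.isLetter('\uD840'));     // false
            // ...while the int overload works on full code points.
            System.out.println(Character.isLetter(0x2F81A));      // true
        }
    }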

Simply put:

  • the "16 bits per char" rule was designed for an old version of the Unicode standard
  • you sometimes need two chars to represent a Unicode code point that isn't in the Basic Multilingual Plane. This kind of "works" because chars are rarely handled individually, especially for code points outside the BMP; when you do need to, the pattern shown after this list handles it.
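When you do need to walk a String by code point rather than by char, the standard pattern looks like this (a sketch; the sample string is arbitrary):

    public class CodePointWalk {
        public static void main(String[] args) {
            String s = "a\uD83D\uDE00b";  // 'a', U+1F600 (outside the BMP), 'b'
            System.out.println(s.length());                       // 4 chars
            System.out.println(s.codePointCount(0, s.length()));  // 3 code points

            // Advance by Character.charCount so a surrogate pair is
            // consumed as a single code point.
            for (int i = 0; i < s.length(); ) {
                int cp = s.codePointAt(i);
                System.out.printf("U+%04X%n", cp);
                i += Character.charCount(cp);
            }
        }
    }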

Even more simply put:

  • a Java char doesn't always represent a Unicode code point.

As an aside, it can be noted that Unicode's evolution past the BMP made UTF-16 globally irrelevant: it no longer even offers a fixed byte-to-character ratio. That's why more modern languages are based on UTF-8. The "UTF-8 Everywhere" manifesto helps in understanding this.

Answered by Denys Séguret on Oct 20 '22