In Unicode encodings such as UTF-16LE or UTF-8, a character may occupy 2 or 3 bytes. However, many Unicode applications ignore the display width of characters and treat them all as if they were Latin letters. For example, in 80-column text, one line should hold 40 Chinese characters or 80 Latin letters, but most applications (Eclipse, Notepad++, and virtually every well-known text editor I have tried) count each Chinese character as having the same width as a Latin letter. This makes the resulting layout ugly and misaligned.
For example, a tab width of 8 gives the following ugly result (counting every character as 1 display column):
apple 10 banana 7 苹果 6 猕猴桃 31 pear 16
However, the expected format (counting each Chinese character as 2 columns) is:
apple 10 banana 7 苹果 6 猕猴桃 31 pear 16
This incorrect calculation of character display width makes these editors nearly useless for tab alignment, line wrapping, and paragraph reformatting.
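For concreteness, here is a minimal sketch (my own illustration, not taken from any editor) of tab expansion driven by display columns rather than character counts; displayWidth is a deliberately crude placeholder that treats every ideograph as 2 columns wide:

```java
public class TabAlignSketch {
    // Hypothetical width function: 2 columns for CJK ideographs, 1 otherwise.
    static int displayWidth(int cp) {
        return Character.isIdeographic(cp) ? 2 : 1;
    }

    // Expand tabs with stops every 8 columns, advancing by *display* columns.
    static String expandTabs(String line) {
        StringBuilder out = new StringBuilder();
        int col = 0;
        for (int i = 0; i < line.length(); ) {
            int cp = line.codePointAt(i);
            if (cp == '\t') {
                int stop = (col / 8 + 1) * 8;        // next tab stop
                while (col < stop) { out.append(' '); col++; }
            } else {
                out.appendCodePoint(cp);
                col += displayWidth(cp);             // '猕' advances 2 columns, 'p' advances 1
            }
            i += Character.charCount(cp);
        }
        return out.toString();
    }
}
```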
The width of a character may of course vary between fonts, but with any fixed-width terminal font a Chinese character is always double width. In other words, regardless of font, each Chinese character is meant to occupy 2 columns.
One possible solution is to compute the width by converting the text to GB2312, since in GB2312 each Chinese character takes 2 bytes. However, some Unicode characters do not exist in the GB2312 (or GBK) charset, and in general it is not a good idea to derive display width from the encoded size in bytes.
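As an illustration of that byte-count heuristic and of how it breaks down, here is a minimal sketch (the sample strings are my own); characters that GB2312 cannot represent are silently replaced, so their byte count no longer reflects their width:

```java
import java.nio.charset.Charset;

public class Gb2312WidthSketch {
    public static void main(String[] args) {
        Charset gb2312 = Charset.forName("GB2312");

        // Heuristic: display width ~= encoded length in GB2312
        // ("apple" -> 5 bytes, each Chinese character -> 2 bytes).
        System.out.println("猕猴桃".getBytes(gb2312).length);   // 6

        // Breaks down: a code point missing from GB2312 is replaced
        // (typically with '?'), so the byte count is wrong for it.
        System.out.println("€".getBytes(gb2312).length);        // 1, although € is not in GB2312
    }
}
```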
Simply treating every code point in the range \u0080..\uFFFF as 2 columns wide is not correct either, because many 1-width characters are scattered throughout that range.
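A quick sketch of why the range check fails (the sample characters are my own choice): many code points above \u0080 still occupy a single column.

```java
public class RangeHeuristicSketch {
    // Naive rule: anything above U+007F is 2 columns wide.
    static int naiveWidth(char c) {
        return c > '\u007F' ? 2 : 1;
    }

    public static void main(String[] args) {
        System.out.println(naiveWidth('é'));   // 2, but é occupies only 1 column
        System.out.println(naiveWidth('Ж'));   // 2, but Cyrillic Zhe occupies only 1 column
        System.out.println(naiveWidth('苹'));  // 2, correct for a CJK ideograph
    }
}
```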
Calculating the display width of Arabic and Korean letters is also difficult, because a single word or character can be composed of an arbitrary number of Unicode code points.
So the display width of a Unicode code point may not even be an integer. I think that is acceptable; it can be rounded to an integer in practice, which is still better than nothing.
So, is there any attribute in the Unicode standard for the preferred display width of a character? Or any Java library function to calculate that display width?
Unicode uses two encoding forms: 8-bit and 16-bit, based on the data type of the data that is being encoded. The default encoding form is 16-bit, where each character is 16 bits (2 bytes) wide; supplementary characters use two 16-bit code units.
In UTF-8, each character is encoded as 1 to 4 bytes; the first 128 Unicode code points are encoded as 1 byte.
﷽ is probably the most space-consuming character.
The Unicode UTF-8 form encodes characters with a variable width of one to four bytes per character (one to three bytes for characters in the Basic Multilingual Plane). Single-byte-encoded strings use one byte per character; multi-byte-encoded strings use up to four bytes per character.
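To make those byte counts concrete in Java (the sample string is my own choice, and note that none of these numbers is a display width):

```java
import java.nio.charset.StandardCharsets;

public class EncodedLengths {
    public static void main(String[] args) {
        String s = "a苹\uD83D\uDE00";  // 'a', a CJK ideograph, and an emoji (U+1F600)

        System.out.println(s.getBytes(StandardCharsets.UTF_8).length);     // 8 = 1 + 3 + 4
        System.out.println(s.getBytes(StandardCharsets.UTF_16LE).length);  // 8 = 2 + 2 + 4
        System.out.println(s.codePointCount(0, s.length()));               // 3 code points
        System.out.println(s.length());                                    // 4 chars (UTF-16 code units)
    }
}
```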
Sounds like you're looking for something like wcwidth and wcswidth, defined in IEEE Std 1003.1-2001, but removed from ISO C:
The wcwidth() function shall determine the number of column positions required for the wide character wc. The wcwidth() function shall either return 0 (if wc is a null wide-character code), or return the number of column positions to be occupied by the wide-character code wc, or return -1 (if wc does not correspond to a printable wide-character code).
Markus Kuhn wrote an open source version, wcwidth.c, based on Unicode 5.0. It includes a description of the problem, and an acknowledgement of the lack of standards in the area:
In fixed-width output devices, Latin characters all occupy a single "cell" position of equal width, whereas ideographic CJK characters occupy two such cells. Interoperability between terminal-line applications and (teletype-style) character terminals using the UTF-8 encoding requires agreement on which character should advance the cursor by how many cell positions. No established formal standards exist at present on which Unicode character shall occupy how many cell positions on character terminals. These routines are a first attempt of defining such behavior based on simple rules applied to data provided by the Unicode Consortium. [...]
It implements the following rules (summarised from the comments in wcwidth.c):
- The null character (U+0000) has a column width of 0.
- Other C0/C1 control characters and DEL return -1.
- Non-spacing and enclosing combining characters (general categories Mn and Me) have a column width of 0, as do ZERO WIDTH SPACE and other format characters; SOFT HYPHEN has a width of 1.
- Hangul Jamo medial vowels and final consonants (U+1160-U+11FF) have a column width of 0.
- Spacing characters in the East Asian Wide (W) or East Asian Fullwidth (F) categories, as defined in Unicode Technical Report #11, have a column width of 2.
- All remaining characters have a column width of 1.
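There is no direct wcwidth() equivalent in the Java standard library, but a rough Java counterpart can be sketched on top of ICU4J, which exposes the East_Asian_Width property through UCharacter; the class name and structure below are my own illustration, and only the ICU4J and java.lang.Character calls are real API:

```java
import com.ibm.icu.lang.UCharacter;
import com.ibm.icu.lang.UProperty;

public final class DisplayWidth {

    /** Column width of one code point, roughly following Kuhn's rules. */
    public static int ofCodePoint(int cp) {
        if (cp == 0) return 0;                                    // null -> 0
        int type = Character.getType(cp);
        if (type == Character.CONTROL) return -1;                 // control chars -> -1
        if (type == Character.NON_SPACING_MARK
                || type == Character.ENCLOSING_MARK
                || type == Character.FORMAT) return 0;            // combining / format -> 0 (simplified)
        int ea = UCharacter.getIntPropertyValue(cp, UProperty.EAST_ASIAN_WIDTH);
        if (ea == UCharacter.EastAsianWidth.WIDE
                || ea == UCharacter.EastAsianWidth.FULLWIDTH) return 2;  // CJK wide -> 2
        return 1;                                                 // everything else -> 1
    }

    /** Column width of a string: sum over code points, ignoring non-printables. */
    public static int of(CharSequence s) {
        int width = 0;
        for (int i = 0; i < s.length(); ) {
            int cp = Character.codePointAt(s, i);
            width += Math.max(ofCodePoint(cp), 0);
            i += Character.charCount(cp);
        }
        return width;
    }

    public static void main(String[] args) {
        System.out.println(of("apple"));   // 5
        System.out.println(of("苹果"));    // 4 (2 columns per ideograph)
        System.out.println(of("猕猴桃"));  // 6
    }
}
```

This simplifies Kuhn's handling of SOFT HYPHEN and the Hangul Jamo range, but it covers the common case the question is about: CJK ideographs advance the cursor by 2 columns, everything else by 1 (or 0 for combining marks).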
You are confusing code points, graphemes and encoding.
The encoding is how code points are converted into an octet stream for storage, transmission or processing. Both UTF-8 and UTF-16 are variable-width encodings, with different code points needing a different number of octets (1 to 4 for UTF-8, up to 6 in its original definition, and either 2 or 4 for UTF-16).
Graphemes are "what we see as a character", these are what are displayed. One code point (e.g. LATIN LOWER CASE A) for one grapheme, but in other cases multiple code points might be needed (e.g. LATIN LOWER CASE A, COMBINING ACUTE and COMBINING UNDERSCORE to get an lower case with acute and underscore as used in Kwakwala). In some cases there is more than one combination of code points to create the same grapheme (e.g. LATIN LOWER CASE A WITH ACUTE and COMBINING UNDERSCORE), this is "normalisation",
I.e. the length of the encoding of a single grapheme will depend on the encoding and normalisation.
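Seen from Java, java.text.Normalizer converts between such equivalent sequences and BreakIterator's character instance walks grapheme boundaries; a minimal sketch (the sample string is my own):

```java
import java.text.BreakIterator;
import java.text.Normalizer;

public class GraphemeSketch {

    // Count grapheme clusters ("what we see as a character"), not chars or code points.
    static int graphemes(String s) {
        BreakIterator it = BreakIterator.getCharacterInstance();
        it.setText(s);
        int count = 0;
        while (it.next() != BreakIterator.DONE) count++;
        return count;
    }

    public static void main(String[] args) {
        String decomposed = "a\u0301";   // LATIN SMALL LETTER A + COMBINING ACUTE ACCENT
        String composed = Normalizer.normalize(decomposed, Normalizer.Form.NFC);  // "á" as one code point

        System.out.println(decomposed.length() + " " + graphemes(decomposed));  // 2 chars, 1 grapheme
        System.out.println(composed.length() + " " + graphemes(composed));      // 1 char, 1 grapheme
    }
}
```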
The display width of the grapheme will depend on the typeface, style and size independently of the encoding length.
For more information, see Wikipedia on Unicode and the Unicode Consortium's home page. There are also some excellent books, perhaps most notably Fonts & Encodings by Yannis Haralambous (O'Reilly).