
Is UTF-8 an injective mapping?

We are writing a C++ application and need to know this:

Is the UTF-8 text encoding an injective mapping from bytes to characters, meaning that every single character (letter, ...) is encoded in only one way? So, e.g., the letter 'Ž' cannot be encoded as, say, both 3231 and 32119.

Cartesius00 asked Nov 13 '11



3 Answers

That depends very much on what you consider a "letter".

UTF-8 is basically just a tiny piece of Unicode.

Basically, there are at least three levels: bytes, code points and grapheme clusters. A code point can be encoded in one or more bytes, according to a certain encoding like UTF-8, UTF-16 or UTF-32. This encoding is unique, because all alternative ways are declared invalid. However, a code point is not always a complete letter, because there are so-called combining characters. Such combining characters follow the base character and, as their name says, are combined with it. For example, there's the combining character U+0308 COMBINING DIAERESIS, which puts a diaeresis (¨) above the preceding letter. So if it follows e.g. an a (U+0061 LATIN SMALL LETTER A), the result is an ä. However, there's also a single code point for the letter ä (U+00E4 LATIN SMALL LETTER A WITH DIAERESIS), which means that the code point sequence U+0061 U+0308 and the code point U+00E4 describe the same letter.

So each code point has a single valid UTF-8 encoding (e.g. U+0061 is "\141", U+0308 is "\314\210" and U+00E4 is "\303\244"). But the letter ä is encoded both by the code point sequence U+0061 U+0308, i.e. the UTF-8 byte sequence "\141\314\210", and by the single code point U+00E4, i.e. the byte sequence "\303\244".

What's worse: since the Unicode makers decided that combining characters follow the base character instead of preceding it, you cannot know whether your letter is complete until you've seen the next code point (if it is not a combining code point, your letter is finished).

celtschk answered Oct 04 '22


Valid UTF-8 indeed encodes each character uniquely. However, there are so-called overlong sequences which conform to the general encoding scheme, but are invalid by definition as only the shortest sequence may be used to encode a character.

For example, there's a derivative of UTF-8 called modified UTF-8 which encodes NUL as the overlong sequence 0xC0 0x80 instead of 0x00 to get an encoding compatible with null-terminated strings.

If you're asking about grapheme clusters (i.e. user-perceived characters) instead of characters, then even valid UTF-8 is ambiguous. However, Unicode defines several normalization forms, and if you restrict yourself to normalized strings, then UTF-8 is indeed injective.

Somewhat off-topic: Here's some ASCII art I came up with to help visualize the different concepts of character. Vertically separated are the human, abstract and machine level. Feel free to come up with better names...

                         [user-perceived characters]<-+
                                      ^               |
                                      |               |
                                      v               |
            [characters] <-> [grapheme clusters]      |
                 ^                    ^               |
                 |                    |               |
                 v                    v               |
[bytes] <-> [codepoints]           [glyphs]<----------+

To get back on topic: This graph also shows where the possible problems may crop up when using bytes to compare abstract strings. In particular (assuming UTF-8), the programmer needs to make sure that

  • the byte sequence is valid, i.e. doesn't contain overlong sequences or encode non-character codepoints
  • the character sequence is normalized, so equivalent grapheme clusters have a unique representation
Christoph answered Oct 04 '22


First you need some terminology:

  • Letter: (abstract concept, not in Unicode) some letter or symbol you want to represent.
  • Codepoint: a number associated with a Unicode character.
  • Grapheme cluster: a sequence of Unicode codepoints that correspond to a single letter, e.g: a + ́ for the letter á.
  • Glyph: (concept at the level of fonts, not in Unicode): a graphical representation of a letter.

Each codepoint (e.g: U+1F4A9) gets a unique representation as bytes in UTF-8 (e.g: 0xF0 0x9F 0x92 0xA9).

Some letters can be represented in several different ways as codepoints (i.e. as different grapheme clusters), e.g. á can be represented as the single codepoint á (LATIN SMALL LETTER A WITH ACUTE), or as the codepoint for a (LATIN SMALL LETTER A) followed by the codepoint for ́ (COMBINING ACUTE ACCENT). Unicode has several canonical normalization forms to deal with this: NFC (Normalization Form C) is, loosely, the composed form with fewer codepoints, while NFD (Normalization Form D) is fully decomposed.

And then there are also ligatures and some other presentation-related variations of a letter (e.g. superscripts, no-break spaces, letters with different shapes at different places in a word, ...). Some of these are in Unicode to permit lossless round-trip conversion to and from legacy character sets. Unicode has compatibility normalization forms (NFKC and NFKD) to deal with these.

ninjalj answered Oct 04 '22