We are writing a C++ application and need to know this: is the UTF-8 text encoding an injective mapping from bytes to characters, meaning that every single character (letter, ...) is encoded in only one way? So, e.g., the letter 'Ž' cannot be encoded as, say, both 3231 and 32119.
There is no difference between "utf8" and "utf-8"; they are simply two names for UTF-8, the most common Unicode encoding.
UTF-8 (Unicode Transformation Format, 8-bit) is a variable-length character encoding for Unicode. It can represent any character in the Unicode standard.
Use of a BOM is neither required nor recommended for UTF-8, but may be encountered in contexts where UTF-8 data is converted from other encoding forms that use a BOM or where the BOM is used as a UTF-8 signature. See the “Byte Order Mark” subsection in Section 16.8, Specials, for more information.
The UTF-8 encoding without a BOM has the property that a document which contains only characters from the US-ASCII range is encoded byte-for-byte the same way as the same document encoded using the US-ASCII encoding. Such a document can be processed and understood when encoded either as UTF-8 or as US-ASCII.
That depends very much on what you consider a "letter". UTF-8 is basically just a small piece of the whole Unicode picture.
Basically there are at least three levels: bytes, code points, and grapheme clusters. A code point can be encoded in one or more bytes, according to a certain encoding such as UTF-8, UTF-16 or UTF-32. This encoding is unique, because all alternative ways are declared invalid.

However, a code point does not always correspond to one letter, because there are so-called combining characters. Such combining characters follow a base character and, as their name says, are combined with it. For example, there's the combining character U+0308 COMBINING DIAERESIS, which puts a diaeresis (¨) above the preceding letter. So if it follows e.g. an a (U+0061 LATIN SMALL LETTER A), the result is an ä. However, there's also a single code point for the letter ä (U+00E4 LATIN SMALL LETTER A WITH DIAERESIS), so the code point sequences U+0061 U+0308 and U+00E4 describe the same letter.
So, each code point has a single valid UTF-8 encoding (e.g. U+0061 is "\141", U+0308 is "\314\210" and U+00E4 is "\303\244"), but the letter ä is encoded both by the code point sequence U+0061 U+0308, i.e. the UTF-8 byte sequence "\141\314\210", and by the single code point U+00E4, i.e. the byte sequence "\303\244".
What's worse: since the Unicode designers decided that combining characters follow the base character instead of preceding it, you cannot know whether your letter is complete until you've seen the next code point (if it is not a combining code point, your letter is finished).
Valid UTF-8 indeed encodes each character uniquely. However, there are so-called overlong sequences which conform to the general encoding scheme, but are invalid by definition as only the shortest sequence may be used to encode a character.
For example, there's a derivative of UTF-8 called Modified UTF-8 which encodes NUL as the overlong sequence 0xC0 0x80 instead of 0x00, to get an encoding compatible with null-terminated strings.
If you're asking about grapheme clusters (ie user-perceived characters) instead of characters, then even valid UTF-8 is ambiguous. However, Unicode defines several different normalization forms, and if you restrict yourself to normalized strings, then UTF-8 is indeed injective.
Somewhat off-topic: Here's some ASCII art I came up with to help visualize the different concepts of character. Vertically separated are the human, abstract and machine level. Feel free to come up with better names...
[user-perceived characters]<-+
^ |
| |
v |
[characters] <-> [grapheme clusters] |
^ ^ |
| | |
v v |
[bytes] <-> [codepoints] [glyphs]<----------+
To get back on topic: this graph also shows where the possible problems may crop up when using bytes to compare abstract strings. In particular (assuming UTF-8), the programmer needs to make sure that the strings are normalized before their byte sequences are compared.
First you need some terminology: a grapheme cluster is a sequence of one or more codepoints rendered as a single user-perceived character, e.g. a + ́ (COMBINING ACUTE ACCENT) for the letter á.

Each codepoint (e.g. U+1F4A9) gets a unique representation as bytes in UTF-8 (e.g. 0xF0 0x9F 0x92 0xA9).
Some letters can be represented in several different ways as codepoint sequences. E.g. á can be represented as the single codepoint á (U+00E1 LATIN SMALL LETTER A WITH ACUTE), or as the codepoint for a (U+0061 LATIN SMALL LETTER A) followed by the codepoint for ́ (U+0301 COMBINING ACUTE ACCENT). Unicode has several canonical normalization forms to deal with this (e.g. NFC, Normalization Form C, is loosely the form with fewer codepoints, while NFD is fully decomposed).
And then there are also ligatures (e.g. fi) and some other presentation-related variations of a letter (superscripts, no-break spaces, letters with different shapes at different places of a word, ...). Some of these are in Unicode to permit lossless round-trip conversion from and to legacy character sets. Unicode has compatibility normalization forms (NFKC and NFKD) to deal with these.