 

How does UTF-8 encoding identify single byte and double byte characters?

Recently I faced an issue with character encoding, and while digging into character sets and encodings this doubt came to my mind. UTF-8 is the most popular encoding because of its backward compatibility with ASCII. Since UTF-8 is a variable-length encoding, how does it differentiate single-byte and double-byte characters? For example, "Aݔ" is stored as "410754" (Unicode for A is 41 and Unicode for the Arabic character is 0754). How does the encoding identify 41 as one character and 0754 as another, two-byte character? Why is it not read as 4107 being one double-byte character and 54 a single-byte character?

asked Jun 15 '17 by Ganesh kumar S R

People also ask

How do you identify a double byte character?

Shift Out and Shift In characters are required to identify the double-byte data in a mixed character string. DBCLOB - A double-byte character large object. The maximum length of a DBCLOB is 1,073,741,823 DBCS characters.

How does UTF-8 represent different types of characters?

UTF-8 encodes a character into a binary string of one, two, three, or four bytes. UTF-16 encodes a Unicode character into a string of either two or four bytes. This distinction is evident from their names. In UTF-8, the smallest binary representation of a character is one byte, or eight bits.
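To see those byte counts concretely, here is a minimal Python sketch using only the built-in str.encode ("utf-16-le" is chosen to avoid the 2-byte byte-order mark that plain "utf-16" prepends, and the sample characters are arbitrary picks):

    # Compare how many bytes UTF-8 and UTF-16 need per character.
    for ch in ["A", "ݔ", "€", "😀"]:
        utf8 = ch.encode("utf-8")
        utf16 = ch.encode("utf-16-le")
        print(f"U+{ord(ch):04X}  utf-8: {len(utf8)} byte(s)  utf-16: {len(utf16)} byte(s)")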

What is single-byte and double-byte characters?

Alphabetic characters, numeric characters, and symbols are called single-byte characters. (Examples) Kumamoto University, TAKARAJIMA, 2003, #, * On the other hand, Japanese characters are twice as wide as normal alphabetic characters and are called double-byte characters.

How many bytes does UTF-8 code encode each character?

UTF-8 is based on 8-bit code units. Each character is encoded as 1 to 4 bytes. The first 128 Unicode code points are encoded as 1 byte in UTF-8.


1 Answer

For example, "Aݔ" is stored as "410754"

That’s not how UTF-8 works.

Characters U+0000 through U+007F (aka ASCII) are stored as single bytes. They are the only characters whose code points numerically match their UTF-8 representation. For example, U+0041 becomes 0x41, which is 01000001 in binary.

All other characters are represented with multiple bytes. U+0080 through U+07FF use two bytes each, U+0800 through U+FFFF use three bytes each, and U+10000 through U+10FFFF use four bytes each.
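As a quick illustration, here is a minimal Python sketch (the sample characters are arbitrary picks from each range, and it also shows the actual bytes behind the "Aݔ" example from the question):

    # One sample character from each UTF-8 length class.
    for ch in ["A", "ݔ", "€", "😀"]:          # U+0041, U+0754, U+20AC, U+1F600
        encoded = ch.encode("utf-8")
        print(f"U+{ord(ch):04X} -> {encoded.hex(' ')} ({len(encoded)} byte(s))")

    # The two characters from the question are really stored as 41 dd 94,
    # not 41 07 54.
    print("Aݔ".encode("utf-8").hex(" "))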

Computers know where one character ends and the next one starts because UTF-8 was designed so that the single-byte values used for ASCII do not overlap with those used in multi-byte sequences. The bytes 0x00 through 0x7F are only used for ASCII and nothing else; the bytes above 0x7F are only used for multi-byte sequences and nothing else. Furthermore, the bytes that are used at the beginning of the multi-byte sequences also cannot occur in any other position in those sequences.
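Because lead bytes and continuation bytes come from disjoint ranges, every byte can be classified on its own, which is how a decoder finds character boundaries. A small Python sketch of that idea (classification only, not a full validating decoder; a few values such as 0xC0, 0xC1 and 0xF5 through 0xFF never appear in well-formed UTF-8, which this sketch ignores):

    def classify(byte: int) -> str:
        """Classify a single UTF-8 byte by its high bits."""
        if byte <= 0x7F:
            return "ASCII, a complete 1-byte character"
        if byte <= 0xBF:                  # 10xxxxxx
            return "continuation byte"
        if byte <= 0xDF:                  # 110xxxxx
            return "lead byte of a 2-byte sequence"
        if byte <= 0xEF:                  # 1110xxxx
            return "lead byte of a 3-byte sequence"
        return "lead byte of a 4-byte sequence"   # 11110xxx

    for b in "Aݔ".encode("utf-8"):        # bytes 41 dd 94
        print(f"0x{b:02X}: {classify(b)}")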

To make that possible, code points cannot simply be stored as raw numbers; they are packed into the following binary patterns:

  • 2 bytes: 110xxxxx 10xxxxxx
  • 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx
  • 4 bytes: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx

The number of leading ones in the first byte tells you how many bytes the whole sequence occupies. Every following byte of the sequence starts with 10 in binary. To encode a character you convert its code point to binary and fill in the x's.

As an example: U+0754 is between U+0080 and U+07FF, so it needs two bytes. 0x0754 in binary is 11101010100, so you replace the x’s with those digits:

11011101 10010100
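The same bit-filling can be written out in code. Below is a minimal Python sketch of it (the function name is made up for illustration, and validation such as rejecting surrogate code points is left out); the result for U+0754 is checked against Python's built-in encoder:

    def encode_utf8(codepoint: int) -> bytes:
        """Encode one code point by filling in the UTF-8 bit patterns."""
        if codepoint <= 0x7F:                    # 0xxxxxxx
            return bytes([codepoint])
        if codepoint <= 0x7FF:                   # 110xxxxx 10xxxxxx
            return bytes([0xC0 | (codepoint >> 6),
                          0x80 | (codepoint & 0x3F)])
        if codepoint <= 0xFFFF:                  # 1110xxxx 10xxxxxx 10xxxxxx
            return bytes([0xE0 | (codepoint >> 12),
                          0x80 | ((codepoint >> 6) & 0x3F),
                          0x80 | (codepoint & 0x3F)])
        return bytes([0xF0 | (codepoint >> 18),  # 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
                      0x80 | ((codepoint >> 12) & 0x3F),
                      0x80 | ((codepoint >> 6) & 0x3F),
                      0x80 | (codepoint & 0x3F)])

    print(encode_utf8(0x0754).hex(" "))               # dd 94 = 11011101 10010100
    print(encode_utf8(0x0754) == "ݔ".encode("utf-8")) # True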

answered Oct 04 '22 by CharlotteBuff