 

Understanding Text Encoding (In .Net)


I have done very little with text encoding. Truthfully, I don't even really know what it means.

For example, if I have something like:

Dim myStr As String = "Hello"

Is that 'encoded' in memory in a particular format? Does that format depend on what language I'm using?

If I were in another country, China for example, and I had a string of Chinese text (Mandarin? My apologies if I'm using the wrong term here), would the following code (which I've used fine on English strings) still work the same?

System.Text.UTF8Encoding encoding = new System.Text.UTF8Encoding();
return encoding.GetBytes(str);

Or would the string lose all meaning when you convert that .NET string to UTF-8, if that conversion isn't valid?

Finally, I've worked with .NET for a few years now and I've never seen, heard of, or had to do anything with encoding. Am I the exception, or is it not a common thing to do?

asked May 03 '11 by Rob P.


2 Answers

The .NET string class encodes strings using UTF-16 - that means 2 bytes per character (although it allows special combinations of two 2-byte values to form a single 4-byte character, so-called "surrogate pairs").

UTF-8, on the other hand, uses however many bytes are necessary to represent a particular Unicode character: only one byte for regular ASCII characters, but typically three bytes for a Chinese character. Both encodings can represent all Unicode characters, so there is always a mapping between them - they are simply different binary representations (i.e., for storing in memory or on disk) of the same Unicode character set.
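
You can see that size difference directly. A minimal C# sketch (top-level statements; the sample strings are arbitrary):

using System;
using System.Text;

string ascii = "A";    // a plain ASCII letter
string chinese = "中"; // a CJK character, U+4E2D

// UTF-16 (Encoding.Unicode) uses 2 bytes for every BMP character.
Console.WriteLine(Encoding.Unicode.GetByteCount(ascii));   // 2
Console.WriteLine(Encoding.Unicode.GetByteCount(chinese)); // 2

// UTF-8 varies: 1 byte for ASCII, 3 bytes for most CJK characters.
Console.WriteLine(Encoding.UTF8.GetByteCount(ascii));      // 1
Console.WriteLine(Encoding.UTF8.GetByteCount(chinese));    // 3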

Since not all Unicode characters fit into the 2 bytes originally reserved by UTF-16, the format also allows a combination of two UTF-16 code units to denote a single 4-byte character - the result is called a "surrogate pair": a pair of 16-bit values that, together, represent a single character.
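
You can observe a surrogate pair in .NET directly (a small sketch; U+1D11E, the musical G clef, is just one example of a character outside the 16-bit range):

using System;

// U+1D11E does not fit in a single 16-bit value,
// so .NET stores it as two char values - a surrogate pair.
string clef = char.ConvertFromUtf32(0x1D11E);

Console.WriteLine(clef.Length);                   // 2 - two UTF-16 code units, one character
Console.WriteLine(char.IsHighSurrogate(clef[0])); // True
Console.WriteLine(char.IsLowSurrogate(clef[1]));  // True
Console.WriteLine(char.ConvertToUtf32(clef, 0));  // 119070 (0x1D11E)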

UTF-8 does not have this problem, since the number of bytes per Unicode character is not fixed to begin with. A good general overview of UTF-8, UTF-16 and BOMs can be found here.
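
This also answers the original question: converting a .NET string containing Chinese text to UTF-8 loses nothing, because every UTF-16 string maps to a valid UTF-8 byte sequence and back. A quick sketch (the sample text is arbitrary):

using System;
using System.Text;

string original = "你好，世界"; // "Hello, world" in Chinese

byte[] utf8Bytes = Encoding.UTF8.GetBytes(original);
string decoded = Encoding.UTF8.GetString(utf8Bytes);

Console.WriteLine(utf8Bytes.Length);    // 15 - five characters at 3 bytes each
Console.WriteLine(decoded == original); // True - the round trip loses nothing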

An excellent overview / introduction to Unicode character encoding is The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets

answered Sep 21 '22 by BrokenGlass


First and foremost: do not despair, you are not alone. Awareness of character encoding and text representation in general is unfortunately uncommon, but there is no better time to start learning than right now!

In modern systems, including .NET, text strings are represented in memory by some encoding of Unicode code points. These are just numbers. The code point for the character A is 65. The code point for the copyright sign © is 169. The code point for the Thai digit six is 3670.
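
You can verify these numbers yourself, since a char converts straight to its code point (a one-line-per-character C# sketch):

using System;

Console.WriteLine((int)'A'); // 65
Console.WriteLine((int)'©'); // 169
Console.WriteLine((int)'๖'); // 3670 - Thai digit six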

The term "encoding" refers to how these numbers are represented in memory. There are a number of standard encodings that are used so that textual representation can remain consistent as data is transmitted from one system to another.
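
For instance, the same character comes out as different bytes under different standard encodings (a minimal sketch using é, U+00E9, as the example character):

using System;
using System.Text;

string s = "é"; // a single code point, U+00E9

Console.WriteLine(BitConverter.ToString(Encoding.UTF8.GetBytes(s)));    // C3-A9
Console.WriteLine(BitConverter.ToString(Encoding.Unicode.GetBytes(s))); // E9-00 (UTF-16, little-endian)
Console.WriteLine(BitConverter.ToString(Encoding.UTF32.GetBytes(s)));   // E9-00-00-00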

A simple encoding standard is UCS-2, in which each code point is stored raw as a 16-bit word. It is limited in that it can only represent code points 0000-FFFF, and that range does not cover the full breadth of Unicode code points.

UTF-16 is the encoding used internally by the .NET String class. Most characters fit into a single 16-bit word here, but values larger than FFFF are encoded using surrogate pairs (see the Wikipedia article). Because of this scheme, the code points D800-DFFF cannot be encoded by UTF-16 - they are reserved for the surrogate halves themselves.
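
To see what this means for byte counts, take a code point above FFFF (a short sketch; U+10400, Deseret capital Long I, is just an example):

using System;
using System.Text;

// U+10400 does not fit in one 16-bit word.
string s = char.ConvertFromUtf32(0x10400);

Console.WriteLine(s.Length);                         // 2 - stored as a surrogate pair
Console.WriteLine(Encoding.Unicode.GetByteCount(s)); // 4 bytes in UTF-16
Console.WriteLine(Encoding.UTF8.GetByteCount(s));    // 4 bytes in UTF-8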

UTF-8 is perhaps the most popular encoding used today, for a number of reasons which are outlined in the Wiki article.

answered Sep 23 '22 by kqnr