 

Using Unicode characters bigger than 2 bytes with .NET

I'm using this code to generate U+10FFFC:

var s = Encoding.UTF8.GetString(new byte[] {0xF4,0x8F,0xBF,0xBC});

I know it's in the private-use range and such, but it does display as a single character when printed, just as I'd expect. The problems come when manipulating this Unicode character.

If I later do this:

foreach(var ch in s)
{
    Console.WriteLine(ch);
}

Instead of it printing just the single character, it prints two characters (i.e. the string is apparently composed of two characters). If I alter my loop to add these characters back to an empty string like so:

string tmp="";
foreach(var ch in s)
{
    Console.WriteLine(ch);
    tmp += ch;
}

At the end of this, tmp will print just a single character.

What exactly is going on here? I thought that a char contains a single Unicode character and that I never had to worry about how many bytes a character is unless I'm converting to bytes. My real use case is that I need to detect when very large Unicode characters are used in a string. Currently I have something like this:

foreach(var ch in s)
{
    if(ch>=0x100000 && ch<=0x10FFFF)
    {
        Console.WriteLine("special character!");
    }
}

However, because of this splitting of very large characters, this doesn't work. How can I modify this to make it work?

asked May 29 '13 by Earlz




2 Answers

U+10FFFC is one Unicode code point, but string's interface does not expose a sequence of Unicode code points directly. Its interface exposes a sequence of UTF-16 code units. That is a very low-level view of text. It is quite unfortunate that such a low-level view of text was grafted onto the most obvious and intuitive interface available... I'll try not to rant much about how I don't like this design, and just say that no matter how unfortunate, it is just a (sad) fact you have to live with.

First off, I will suggest using char.ConvertFromUtf32 to get your initial string. Much simpler, much more readable:

var s = char.ConvertFromUtf32(0x10FFFC);

So, this string's Length is not 1, because, as I said, the interface deals in UTF-16 code units, not Unicode code points. U+10FFFC uses two UTF-16 code units, so s.Length is 2. All code points above U+FFFF require two UTF-16 code units for their representation.

You should note that ConvertFromUtf32 doesn't return a char: char is a UTF-16 code unit, not a Unicode code point. To be able to return all Unicode code points, that method cannot return a single char; sometimes it needs to return two, which is why it returns a string. Sometimes you will find APIs dealing in ints instead of chars, because an int can hold any code point (that's what ConvertFromUtf32 takes as an argument, and what ConvertToUtf32 produces as a result).
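To make that concrete, a small sketch (the numeric results in the comments assume the U+10FFFC example from the question):

var s = char.ConvertFromUtf32(0x10FFFC);

Console.WriteLine(s.Length);                                 // 2: two UTF-16 code units, not one char
Console.WriteLine(char.ConvertToUtf32(s, 0).ToString("X"));  // 10FFFC: back to the code point as an int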

string implements IEnumerable<char>, which means that when you iterate over a string you get one UTF-16 code unit per iteration. That's why iterating your string and printing it out yields some broken output with two "things" in it. Those are the two UTF-16 code units that make up the representation of U+10FFFC. They are called "surrogates". The first one is a high/lead surrogate and the second one is a low/trail surrogate. When you print them individually they do not produce meaningful output because lone surrogates are not even valid in UTF-16, and they are not considered Unicode characters either.
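The two code units can be inspected individually; for U+10FFFC they come out as follows (a sketch, with the expected values in the comments):

var s = char.ConvertFromUtf32(0x10FFFC);       // as above

Console.WriteLine(((int)s[0]).ToString("X4")); // DBFF: high (lead) surrogate
Console.WriteLine(((int)s[1]).ToString("X4")); // DFFC: low (trail) surrogate
Console.WriteLine(char.IsHighSurrogate(s[0])); // True
Console.WriteLine(char.IsLowSurrogate(s[1]));  // True
Console.WriteLine(char.IsSurrogatePair(s[0], s[1])); // True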

When you append those two surrogates to the string in the loop, you effectively reconstruct the surrogate pair, and printing that pair later as one gets you the right output.
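That reconstruction can be checked directly (a condensed sketch of the question's loop):

var s = char.ConvertFromUtf32(0x10FFFC);
string tmp = "";
foreach (var ch in s)
    tmp += ch;               // first the high surrogate, then the low one

Console.WriteLine(tmp == s); // True: the surrogate pair is back together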

And on the ranting front, note how nothing complains that you used a malformed UTF-16 sequence in that loop. It creates a string with a lone surrogate, and yet everything carries on as if nothing happened: the string type is not the type of well-formed UTF-16 code unit sequences, but the type of any UTF-16 code unit sequence.
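For example, nothing throws if you put a lone surrogate into a string of its own (a sketch; how the console renders it is another matter):

var s = char.ConvertFromUtf32(0x10FFFC);
string lone = s[0].ToString();                    // just the high surrogate, malformed UTF-16 on its own
Console.WriteLine(lone.Length);                   // 1: string stores it without complaint
Console.WriteLine(char.IsHighSurrogate(lone[0])); // True, but there is no matching low surrogate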

The char structure provides static methods to deal with surrogates: IsHighSurrogate, IsLowSurrogate, IsSurrogatePair, ConvertToUtf32, and ConvertFromUtf32. If you want you can write an iterator that iterates over Unicode characters instead of UTF-16 code units:

using System.Collections.Generic;

// Extension methods need a containing static class; the class name here is arbitrary.
static class StringExtensions
{
    public static IEnumerable<int> AsCodePoints(this string s)
    {
        for (int i = 0; i < s.Length; ++i)
        {
            // ConvertToUtf32 combines a surrogate pair (or a single char) into one code point.
            yield return char.ConvertToUtf32(s, i);
            // A high surrogate means the code point spans two chars, so skip its low surrogate.
            if (char.IsHighSurrogate(s, i))
                i++;
        }
    }
}

Then you can iterate like:

foreach(int codePoint in s.AsCodePoints())
{
    // do stuff. codePoint will be an int with value 0x10FFFC in your example
}
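Applied to the range check from the question, the detection then works on full code points (a sketch reusing the same AsCodePoints extension):

foreach (int codePoint in s.AsCodePoints())
{
    if (codePoint >= 0x100000 && codePoint <= 0x10FFFF)
        Console.WriteLine("special character!");
}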

If you prefer to get each code point as a string instead, change the return type to IEnumerable<string> and the yield line to:

yield return char.ConvertFromUtf32(char.ConvertToUtf32(s, i));
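Written out in full, that variant would look like this (a sketch; it goes in the same static class and replaces the int-returning version, since C# cannot overload on return type alone):

public static IEnumerable<string> AsCodePoints(this string s)
{
    for (int i = 0; i < s.Length; ++i)
    {
        // Surrogate pair (or single char) -> int code point -> one-code-point string.
        yield return char.ConvertFromUtf32(char.ConvertToUtf32(s, i));
        if (char.IsHighSurrogate(s, i))
            i++;
    }
}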

With that version, the following works as-is:

foreach(string codePoint in s.AsCodePoints())
{
     Console.WriteLine(codePoint);
}
answered Oct 29 '22 by R. Martinho Fernandes


As Martinho already posted, it is much easier to create the string with this private-use code point this way:

var s = char.ConvertFromUtf32(0x10FFFC);

But looping through the two char elements of that string is pointless:

foreach(var ch in s)
{
    Console.WriteLine(ch);
}

What for? You will just get the high and low surrogate that encode the code point. Remember that a char is a 16-bit type, so it can hold at most 0xFFFF. Your code point doesn't fit into a 16-bit type; indeed, the highest code point (0x10FFFF) needs 21 bits, so the next wider type that fits is a 32-bit type. The two char elements are not characters but a surrogate pair: the value 0x10FFFC is encoded into the two surrogates.
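To illustrate how the value is spread over the pair, here is the decoding arithmetic spelled out (a sketch; 0xD800 and 0xDC00 are the standard high and low surrogate bases, and this is essentially what char.ConvertToUtf32 does for you):

var s = char.ConvertFromUtf32(0x10FFFC);
char high = s[0];   // 0xDBFF
char low  = s[1];   // 0xDFFC

// Recombine the pair by hand: 10 bits from each surrogate, plus the 0x10000 offset.
int codePoint = ((high - 0xD800) << 10) + (low - 0xDC00) + 0x10000;
Console.WriteLine(codePoint.ToString("X"));   // 10FFFC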

answered Oct 29 '22 by brighty