
Flex(lexer) support for unicode

I am wondering whether the newest version of flex supports Unicode.

If so, how can I use patterns to match Chinese characters?

More: Use regular expression to match ANY Chinese character in utf-8 encoding

asked Mar 08 '12 01:03 by xiaohan2012




2 Answers

At the moment, flex only generates 8-bit scanners, which basically limits you to UTF-8. So if you have a pattern:

肖晗   { printf ("xiaohan\n"); } 

it will work as expected, as the sequence of bytes in the pattern and in the input will be the same. What's more difficult is character classes. If you want to match either the character 肖 or 晗, you can't write:

[肖晗]   { printf ("xiaohan/2\n"); } 

because this will match each of the six bytes 0xe8, 0x82, 0x96, 0xe6, 0x99 and 0x97, which in practice means that if you supply 肖晗 as the input, the pattern will match six times. So in this simple case, you have to rewrite the pattern to (肖|晗).
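A minimal sketch of a complete scanner using the alternation form (the rule and message come from the answer; the skeleton around them is illustrative):

```lex
%option noyywrap
%%
肖|晗    { printf ("xiaohan/2\n"); }   /* one match per character */
.|\n     { }                           /* skip everything else */
%%
```

Because each alternative is the full three-byte UTF-8 sequence, flex's longest-match rule consumes each character as a unit instead of byte by byte.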

For ranges, Hans Aberg has written a tool in Haskell that transforms these into 8-bit patterns:

Unicode> urToRegU8 0 0xFFFF
[\0-\x7F]|[\xC2-\xDF][\x80-\xBF]|(\xE0[\xA0-\xBF]|[\xE1-\xEF][\x80-\xBF])[\x80-\xBF]
Unicode> urToRegU32 0x00010000 0x001FFFFF
\0[\x01-\x1F][\0-\xFF][\0-\xFF]
Unicode> urToRegU32L 0x00010000 0x001FFFFF
[\x01-\x1F][\0-\xFF][\0-\xFF]\0

This isn't pretty, but it should work.
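To see where those generated byte ranges come from, here is a hedged C sketch (the function name is mine, not part of the tool) that encodes a BMP codepoint as UTF-8; the bytes it emits land exactly in the character classes produced by `urToRegU8 0 0xFFFF`:

```c
/* Sketch: encode a BMP codepoint (U+0000..U+FFFF, surrogates excluded)
 * as UTF-8 into out[], returning the number of bytes written. */
int utf8_encode_bmp(unsigned cp, unsigned char out[3])
{
    if (cp < 0x80) {                 /* one byte: [\0-\x7F] */
        out[0] = (unsigned char)cp;
        return 1;
    }
    if (cp < 0x800) {                /* two bytes: [\xC2-\xDF][\x80-\xBF] */
        out[0] = 0xC0 | (cp >> 6);
        out[1] = 0x80 | (cp & 0x3F);
        return 2;
    }
    /* three bytes: lead in [\xE0-\xEF], continuations in [\x80-\xBF];
     * after \xE0 the second byte is restricted to [\xA0-\xBF], which is
     * exactly how the generated pattern excludes overlong forms */
    out[0] = 0xE0 | (cp >> 12);
    out[1] = 0x80 | ((cp >> 6) & 0x3F);
    out[2] = 0x80 | (cp & 0x3F);
    return 3;
}
```

Encoding 肖 (U+8096) this way yields the bytes 0xE8 0x82 0x96 quoted earlier in the answer.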

answered Sep 22 '22 19:09 by Tim Landscheidt


Flex does not support Unicode. However, Flex supports "8 bit clean" binary input. Therefore you can write lexical patterns which match UTF-8. You can use these patterns in specific lexical areas of the input language, for instance identifiers, comments or string literals.

This works well for typical programming languages, where you can assert to the users of your implementation that the source language is written in ASCII/UTF-8 (and no other encoding is supported, period).

This approach won't work if your scanner must process text that can be in any encoding. It also won't work (very well) if you need to express lexical rules specifically for Unicode elements; that is, if you need Unicode characters and Unicode regexes in the scanner itself.

The idea is that you can recognize a pattern which includes UTF-8 bytes using a lex rule (and then perhaps take the yytext and convert it out of UTF-8, or at least validate it).

For a working example, see the source code of the TXR language, in particular this file: http://www.kylheku.com/cgit/txr/tree/parser.l

Scroll down to this section:

ASC     [\x00-\x7f]
ASCN    [\x00-\t\v-\x7f]
U       [\x80-\xbf]
U2      [\xc2-\xdf]
U3      [\xe0-\xef]
U4      [\xf0-\xf4]

UANY    {ASC}|{U2}{U}|{U3}{U}{U}|{U4}{U}{U}{U}
UANYN   {ASCN}|{U2}{U}|{U3}{U}{U}|{U4}{U}{U}{U}

UONLY   {U2}{U}|{U3}{U}{U}|{U4}{U}{U}{U}

As you can see, we can define patterns to match ASCII characters as well as UTF-8 start and continuation bytes. UTF-8 is a lexical notation, and this is a lexical analyzer generator, so ... no problem!

Some explanations: UANY matches any character, single-byte ASCII or multi-byte UTF-8. UANYN is like UANY but does not match the newline; this is useful for tokens that do not break across lines, like, say, a comment from # to the end of the line containing international text. UONLY matches only a UTF-8 extended character, not an ASCII one; this is useful for writing a lex rule which needs to exclude certain specific ASCII characters (not just newline) while allowing all extended characters.
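As an illustration (my own sketch, not taken verbatim from parser.l), a rule for exactly that kind of #-to-end-of-line comment could use UANYN like this:

```lex
%%
#{UANYN}*    { /* comment to end of line; UTF-8 content is fine */ }
```

Since UANYN never matches a newline, the comment token stops at the end of the line regardless of what international text it contains.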

DISCLAIMER: Note that the scanner's rules use a function called utf8_dup_from to convert the yytext to wide character strings containing Unicode codepoints. That function is robust; it detects problems like overlong sequences and invalid bytes and handles them properly. That is, this program is not relying on these lex rules to do the validation and conversion, just to do the basic lexical recognition. These rules will recognize an overlong form (like an ASCII code encoded using several bytes) as valid syntax, but the conversion function will treat it properly. In any case, I don't expect UTF-8 related security issues in the program source code, since you have to trust source code to run it anyway (but data handled by the program may not be trusted!). If you're writing a scanner for untrusted UTF-8 data, take care!
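The kind of validating conversion described above can be sketched in C as follows. This is an illustrative stand-in, not TXR's actual utf8_dup_from: it decodes one UTF-8 sequence and rejects bad continuation bytes, overlong forms, UTF-16 surrogates and out-of-range values:

```c
#include <stddef.h>

/* Hypothetical sketch: decode one UTF-8 sequence starting at s, storing
 * the codepoint in *cp.  Returns the number of bytes consumed, or 0 on
 * invalid input. */
size_t utf8_decode_one(const unsigned char *s, unsigned long *cp)
{
    unsigned long c;
    size_t len, i;

    if (s[0] < 0x80)      { c = s[0];        len = 1; }
    else if (s[0] < 0xC2) return 0;          /* continuation or overlong lead */
    else if (s[0] < 0xE0) { c = s[0] & 0x1F; len = 2; }
    else if (s[0] < 0xF0) { c = s[0] & 0x0F; len = 3; }
    else if (s[0] < 0xF5) { c = s[0] & 0x07; len = 4; }
    else                  return 0;          /* 0xF5-0xFF: never valid */

    for (i = 1; i < len; i++) {
        if ((s[i] & 0xC0) != 0x80)
            return 0;                        /* not a continuation byte */
        c = (c << 6) | (s[i] & 0x3F);
    }

    /* reject overlong encodings, surrogates and values > U+10FFFF */
    if ((len == 2 && c < 0x80) || (len == 3 && c < 0x800) ||
        (len == 4 && c < 0x10000) ||
        (c >= 0xD800 && c <= 0xDFFF) || c > 0x10FFFF)
        return 0;

    *cp = c;
    return len;
}
```

Note that the lex rules above would happily match an overlong sequence like 0xC0 0x80; it is this second pass over yytext that catches it.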

answered Sep 22 '22 19:09 by Kaz