Is there a standard way, in Python, to normalize a unicode string, so that it only contains the simplest unicode entities that can be used to represent it?
I mean, something which would translate a sequence like ['LATIN SMALL LETTER A', 'COMBINING ACUTE ACCENT'] to ['LATIN SMALL LETTER A WITH ACUTE']?
Here is the problem:
>>> import unicodedata
>>> char = "á"
>>> len(char)
1
>>> [unicodedata.name(c) for c in char]
['LATIN SMALL LETTER A WITH ACUTE']
But now:
>>> char = "á"
>>> len(char)
2
>>> [unicodedata.name(c) for c in char]
['LATIN SMALL LETTER A', 'COMBINING ACUTE ACCENT']
I could, of course, iterate over all the chars and do manual replacements, etc., but that is not efficient, and I'm pretty sure I would miss half of the special cases and make mistakes.
The unicodedata module offers a .normalize() function; you want to normalize to the NFC form. An example using the same U+0061 LATIN SMALL LETTER A - U+0301 COMBINING ACUTE ACCENT combination and U+00E1 LATIN SMALL LETTER A WITH ACUTE code points you used:
>>> print(ascii(unicodedata.normalize('NFC', '\u0061\u0301')))
'\xe1'
>>> print(ascii(unicodedata.normalize('NFD', '\u00e1')))
'a\u0301'
(I used the ascii() function here to ensure non-ASCII codepoints are printed using escape syntax, making the differences clear.)
NFC, or 'Normal Form Composed', returns composed characters; NFD, 'Normal Form Decomposed', gives you decomposed sequences of base characters and combining marks.
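This answers the equivalence concern in the question directly: two strings that render identically still compare unequal until both are normalized to the same form. A minimal sketch:

>>> import unicodedata
>>> s1 = '\u00e1'   # precomposed: LATIN SMALL LETTER A WITH ACUTE
>>> s2 = 'a\u0301'  # decomposed: LATIN SMALL LETTER A + COMBINING ACUTE ACCENT
>>> s1 == s2
False
>>> unicodedata.normalize('NFC', s1) == unicodedata.normalize('NFC', s2)
True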
The additional NFKC and NFKD forms deal with compatibility codepoints; e.g. U+2160 ROMAN NUMERAL ONE is really just the same thing as U+0049 LATIN CAPITAL LETTER I, but is present in the Unicode standard to remain compatible with encodings that treat them separately. Using either the NFKC or NFKD form, in addition to composing or decomposing characters, will also replace all 'compatibility' characters with their canonical form.
Here is an example using the U+2167 ROMAN NUMERAL EIGHT codepoint; using the NFKC form replaces this with a sequence of ASCII V and I characters:
>>> unicodedata.normalize('NFC', '\u2167')
'Ⅷ'
>>> unicodedata.normalize('NFKC', '\u2167')
'VIII'
Note that there is no guarantee that composed and decomposed forms are commutative; normalizing a combined character to NFC form, then converting the result back to NFD form does not always result in the same character sequence. The Unicode standard maintains a list of exceptions; characters on this list are composable, but not decomposable back to their combined form, for various reasons. Also see the documentation on the Composition Exclusion Table.
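To make the exclusion behaviour concrete, here is a small sketch using U+0958 DEVANAGARI LETTER QA, one of the characters listed in the Composition Exclusion Table (any excluded character would behave the same way): NFC decomposes it but never recomposes it, so round-tripping does not return the original codepoint:

>>> import unicodedata
>>> qa = '\u0958'  # DEVANAGARI LETTER QA, a composition exclusion
>>> [unicodedata.name(c) for c in unicodedata.normalize('NFC', qa)]
['DEVANAGARI LETTER KA', 'DEVANAGARI SIGN NUKTA']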
Yes, there is:

unicodedata.normalize(form, unistr)

You need to select one of the four normalization forms ('NFC', 'NFD', 'NFKC', or 'NFKD') as the form argument.
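A quick sketch showing all four forms applied to one string, combining the 'á' and ROMAN NUMERAL EIGHT characters from the examples above:

>>> import unicodedata
>>> s = '\u00e1\u2167'  # 'á' followed by ROMAN NUMERAL EIGHT
>>> for form in ('NFC', 'NFD', 'NFKC', 'NFKD'):
...     print(form, ascii(unicodedata.normalize(form, s)))
...
NFC '\xe1\u2167'
NFD 'a\u0301\u2167'
NFKC '\xe1VIII'
NFKD 'a\u0301VIII'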