I have learnt from PEP 3131 that non-ASCII identifiers are supported in Python, though it's not considered best practice.
However, I get this strange behaviour, where my 𝜏 identifier (U+1D70F) seems to be automatically converted to τ (U+03C4).
class Base(object):
    def __init__(self):
        self.𝜏 = 5 # defined with U+1D70F
a = Base()
print(a.𝜏)     # 5             # (U+1D70F)
print(a.τ)     # 5 as well     # (U+03C4) ? another way to access it?
d = a.__dict__ # {'τ':  5}     # (U+03C4) ? seems converted
print(d['τ'])  # 5             # (U+03C4) ? consistent with the conversion
print(d['𝜏'])  # KeyError: '𝜏' # (U+1D70F) ?! unexpected!
Is that expected behaviour? Why does this silent conversion occur? Does it have anything to do with NFKC normalization? I thought this was only for canonically ordering Unicode character sequences...
Per the documentation on identifiers:
All identifiers are converted into the normal form NFKC while parsing; comparison of identifiers is based on NFKC.
You can see that U+03C4 is the appropriate result using unicodedata:
>>> import unicodedata
>>> unicodedata.normalize('NFKC', '𝜏')
'τ'
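For a closer look at what's being mapped, unicodedata.name (shown here purely as an illustration) confirms that the identifier character is the compatibility variant and the dictionary key is the plain Greek letter:
>>> unicodedata.name('𝜏')
'MATHEMATICAL ITALIC SMALL TAU'
>>> unicodedata.name('τ')
'GREEK SMALL LETTER TAU'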
However, this conversion doesn't apply to string literals, like the one you're using as a dictionary key, so the lookup uses the unconverted character against a dictionary that only contains the converted one.
self.𝜏 = 5  # implicitly converted to "self.τ = 5"
a.𝜏  # implicitly converted to "a.τ"
d['𝜏']  # not converted
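The same asymmetry appears if you set attributes dynamically. As a rough sketch (reusing the a defined above), setattr doesn't normalize its string argument either, so you can end up with two distinct keys in __dict__:
>>> setattr(a, '𝜏', 10)  # raw U+1D70F, stored as-is
>>> a.__dict__           # now holds both the normalized and the raw key
{'τ': 5, '𝜏': 10}
>>> a.𝜏                  # source code is normalized by the parser, so this reads a.τ
5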
You can see similar problems with e.g. string literals used with getattr:
>>> getattr(a, '𝜏')
Traceback (most recent call last):
  File "python", line 1, in <module>
AttributeError: 'Base' object has no attribute '𝜏'
>>> getattr(a, unicodedata.normalize('NFKC', '𝜏'))
5
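If you need to look up attributes from strings that might contain unnormalized characters, one option is to normalize the name yourself before the lookup. This is just a sketch, and nfkc_getattr is a made-up helper name:
import unicodedata

def nfkc_getattr(obj, name, *default):
    # Apply the same NFKC normalization the parser applies to
    # identifiers, then fall back to an ordinary getattr.
    return getattr(obj, unicodedata.normalize('NFKC', name), *default)

print(nfkc_getattr(a, '𝜏'))  # 5, even though the string contains U+1D70F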