Let's say I have a string in Python:
>>> s = 'python'
>>> len(s)
6
Now I encode this string like this:
>>> b = s.encode('utf-8')
>>> b16 = s.encode('utf-16')
>>> b32 = s.encode('utf-32')
What I get from the above operations is a bytes object -- that is, b, b16 and b32 are just arrays of bytes (each byte being 8 bits long, of course).
But we encoded the string. So, what does this mean? How do we attach the notion of "encoding" to the raw array of bytes?
The answer lies in the fact that each of these arrays of bytes is generated in a particular way. Let's look at these arrays:
>>> [hex(x) for x in b]
['0x70', '0x79', '0x74', '0x68', '0x6f', '0x6e']
>>> len(b)
6
This array indicates that for each character we have one byte (because all the characters fall below 128). Hence, we can say that "encoding" the string to UTF-8 collects each character's corresponding code point and puts it into the array. If the code point does not fit in one byte, then UTF-8 uses two, three or four bytes for it. Hence UTF-8 consumes the fewest bytes it can.
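To see the variable width in action, here is a quick check with two illustrative characters that are not in the original string ('é' is a two-byte code point in UTF-8, '€' a three-byte one):
>>> 'é'.encode('utf-8')
b'\xc3\xa9'
>>> '€'.encode('utf-8')
b'\xe2\x82\xac'
>>> len('é'.encode('utf-8')), len('€'.encode('utf-8'))
(2, 3)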
>>> [hex(x) for x in b16]
['0xff', '0xfe', '0x70', '0x0', '0x79', '0x0', '0x74', '0x0', '0x68', '0x0', '0x6f', '0x0', '0x6e', '0x0']
>>> len(b16)
14 # (2 + 6*2)
Here we can see that "encoding to utf-16" first puts a two-byte BOM (FF FE) into the bytes array, and after that, for each character it puts two bytes into the array. (In our case, the second byte is always zero.)
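The second byte is only zero here because every character of 'python' has a code point below 256, and if we pick an explicit byte order, no BOM is written at all. A quick check (the character '€', code point 0x20AC, is just an illustrative example):
>>> s.encode('utf-16-le')   # explicit byte order, so no BOM
b'p\x00y\x00t\x00h\x00o\x00n\x00'
>>> [hex(x) for x in '€'.encode('utf-16-le')]   # second byte is non-zero here
['0xac', '0x20']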
>>> [hex(x) for x in b32]
['0xff', '0xfe', '0x0', '0x0', '0x70', '0x0', '0x0', '0x0', '0x79', '0x0', '0x0', '0x0', '0x74', '0x0', '0x0', '0x0', '0x68', '0x0', '0x0', '0x0', '0x6f', '0x0', '0x0', '0x0', '0x6e', '0x0', '0x0', '0x0']
>>> len(b32)
28 # (2+ 6*4 + 2)
In the case of "encoding in utf-32", we first put the BOM, then for each character we put four bytes, and lastly we put two zero bytes into the array.
Hence, we can say that the "encoding process" collects 1, 2 or 4 bytes (depending on the encoding) for each character in the string, and prepends and appends more bytes to them to create the final result array of bytes.
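Whatever the per-character width, all three byte arrays still represent the same string; a quick round-trip check (decoding is what my second question below is about):
>>> b.decode('utf-8') == s
True
>>> b16.decode('utf-16') == s
True
>>> b32.decode('utf-32') == s
True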
Now, my questions:

1. b, b16 and b32 are actually lists of bytes. What is the memory representation of the string? Exactly what is stored in memory for a string?

2. We said that on encode(), each character's corresponding code point is collected (the code point corresponding to the encoding name) and put into an array of bytes. What exactly happens when we do a decode()?
First of all, UTF-32 is a 4-byte encoding, so its BOM is a four-byte sequence too:
>>> import codecs
>>> codecs.BOM_UTF32
b'\xff\xfe\x00\x00'
And because different computer architectures treat byte order differently (called endianness), there are two variants of the BOM, little and big endian:
>>> codecs.BOM_UTF32_LE
b'\xff\xfe\x00\x00'
>>> codecs.BOM_UTF32_BE
b'\x00\x00\xfe\xff'
The purpose of the BOM is to communicate that order to the decoder; read the BOM and you know if it is big or little endian. So, those last two null bytes in your UTF-32 string are part of the last encoded character.
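You can watch the decoder do this. A minimal sketch, assuming b32 was produced on a little-endian machine as above: the BOM-aware 'utf-32' codec consumes the BOM, while the explicit 'utf-32-le' codec treats those four bytes as an ordinary U+FEFF character:
>>> b32.decode('utf-32')      # reads and strips the BOM
'python'
>>> b32.decode('utf-32-le')   # no BOM handling; \xff\xfe\x00\x00 decodes to U+FEFF
'\ufeffpython'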
The UTF-16 BOM is thus similar, in that there are two variants:
>>> codecs.BOM_UTF16
b'\xff\xfe'
>>> codecs.BOM_UTF16_LE
b'\xff\xfe'
>>> codecs.BOM_UTF16_BE
b'\xfe\xff'
It depends on your computer architecture which one is used by default.
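You can check which order your machine uses; the output below is for a little-endian machine, while on a big-endian one sys.byteorder is 'big' and the BE constants match instead:
>>> import sys
>>> sys.byteorder
'little'
>>> codecs.BOM_UTF16 == codecs.BOM_UTF16_LE
True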
UTF-8 doesn't need a BOM at all; UTF-8 uses 1 or more bytes per character (adding bytes as needed to encode more complex values), but the order of those bytes is defined in the standard. Microsoft deemed it necessary to introduce a UTF-8 BOM anyway (so its Notepad application could detect UTF-8), but since the order of the bytes never varies, its use is discouraged.
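Python exposes this Microsoft-style BOM through the 'utf-8-sig' codec, which writes the three-byte UTF-8 BOM on encode and strips it on decode:
>>> codecs.BOM_UTF8
b'\xef\xbb\xbf'
>>> 'python'.encode('utf-8-sig')
b'\xef\xbb\xbfpython'
>>> b'\xef\xbb\xbfpython'.decode('utf-8-sig')
'python'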
As for what Python stores for unicode strings: that actually changed in Python 3.3. Before 3.3, internally at the C level, Python stored either UTF-16 or UTF-32 byte combinations, depending on whether Python was compiled with wide-character support (see How to find out if Python is compiled with UCS-2 or UCS-4?; UCS-2 is essentially UTF-16, and UCS-4 is UTF-32). So each character takes either 2 or 4 bytes of memory.
As of Python 3.3, the internal representation uses the minimal number of bytes required to represent all characters in the string. For plain ASCII and Latin-1-encodable text, 1 byte per character is used; for the rest of the Basic Multilingual Plane (BMP), 2 bytes; and for text containing characters beyond that, 4 bytes. Python switches between the formats as needed, so storage has become a lot more efficient for most cases. For more detail see What's New in Python 3.3.
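You can observe this indirectly with sys.getsizeof(). The exact numbers vary by Python version and platform, but the ordering reflects the 1-, 2- and 4-byte storage (the Greek and emoji characters are illustrative examples):
>>> import sys
>>> ascii_text = 'a' * 100            # 1 byte per character
>>> bmp_text = '\u03b1' * 100         # Greek alpha: 2 bytes per character
>>> astral_text = '\U0001f600' * 100  # beyond the BMP: 4 bytes per character
>>> sys.getsizeof(ascii_text) < sys.getsizeof(bmp_text) < sys.getsizeof(astral_text)
True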
I can strongly recommend you read up on Unicode and Python, starting with the official Python Unicode HOWTO.