I've read in several Stack Overflow answers that some characters do not map directly (or are even "unmappable") when converting from Cp1252 (a.k.a. Windows-1252; they're the same, aren't they?) to UTF-8, e.g. here: https://stackoverflow.com/a/23399926/2018047
Can someone please shed some more light on this? Does that mean that if I batch/mass convert source code from cp1252 to utf-8 I'll get some characters that will end up as garbage?
Yes, in the narrow sense that certain byte values never appear in UTF-8. The bytes 0xC0, 0xC1, 0xF5, 0xF6, 0xF7, 0xF8, 0xF9, 0xFA, 0xFB, 0xFC, 0xFD, 0xFE and 0xFF are invalid in UTF-8: 0xC0 and 0xC1 could only start "overlong" two-byte encodings, which UTF-8 forbids, and 0xF5 through 0xFF would encode code points above U+10FFFF. A UTF-8 code unit is 8 bits, i.e. one byte, so if by "char" you mean an 8-bit byte, the invalid code units are exactly the byte values that never appear in well-formed UTF-8 text.
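One way to confirm that list, sketched in Python using only the standard codecs (it enumerates every code point, so it takes a moment to run):

```python
# Collect every byte value that occurs anywhere in UTF-8 encoded text by
# encoding all Unicode scalar values and recording the bytes produced.
used = set()
for cp in range(0x110000):
    if 0xD800 <= cp <= 0xDFFF:   # surrogates are not encodable on their own
        continue
    used.update(chr(cp).encode("utf-8"))

# The leftover byte values can never appear in well-formed UTF-8.
unused = sorted(set(range(256)) - used)
print([hex(b) for b in unused])
# ['0xc0', '0xc1', '0xf5', '0xf6', ..., '0xfe', '0xff']
```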
UTF-8 supports every Unicode character, which pragmatically means every natural language's script (Coptic, Sinhala, Phoenician, Cherokee, etc.), as well as many non-spoken notations (music notation, mathematical symbols, APL).
Windows-1252 is a subset of UTF-8 in terms of which characters are available, but not in terms of their byte-by-byte representation: for the characters it places at byte values 128 through 255, UTF-8 uses a different (multi-byte) encoding. Only characters in the ASCII range (127 and below) are encoded 1:1 in both.
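Both points can be checked in a couple of lines of Python (a sketch using only the standard codecs):

```python
# ASCII-range characters get identical single-byte encodings in both.
assert "Hello!".encode("cp1252") == "Hello!".encode("utf-8") == b"Hello!"

# Above 0x7F the encodings diverge: the euro sign is one byte in
# Windows-1252 but three bytes in UTF-8.
print("€".encode("cp1252"))  # b'\x80'
print("€".encode("utf-8"))   # b'\xe2\x82\xac'
```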
UTF-8 is a character encoding system. It is backward-compatible with ASCII (plain ASCII text is already valid UTF-8) while still being able to represent international characters, such as Chinese. As of the mid-2020s, UTF-8 is the most widely used encoding on the web.
If you look at a Windows-1252 code page table, you can see that the bytes 0x81, 0x8D, 0x8F, 0x90 and 0x9D do not have anything assigned to them.
If your input file contains those bytes and you treat it as Windows-1252, those bytes will be rejected as invalid characters. In practice, that usually means the input file was not actually Windows-1252.
All other bytes encode either printable characters or control characters, and all those characters are present in Unicode and therefore can unambiguously be encoded in UTF-8.
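This is observable with Python's cp1252 codec, which treats those five bytes as undefined and refuses to decode them in strict mode (a sketch):

```python
# The five unassigned Windows-1252 bytes fail strict decoding; every
# other byte value decodes successfully.
for b in (0x81, 0x8D, 0x8F, 0x90, 0x9D):
    try:
        bytes([b]).decode("cp1252")
    except UnicodeDecodeError:
        print(hex(b), "has no character assigned in cp1252")
```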
I have no idea what the linked answer is trying to claim; its last paragraph sounds like nonsense.
Several more remarks, which may shine some light on what you are trying to find out:
- UTF-8 and Windows-1252 are totally incompatible with each other outside the ASCII range
- both of those encodings leave certain byte values unused, different ones in each case
- moreover, certain byte sequences are also invalid in UTF-8
- in general, if you treat a file as if it contained text encoded in UTF-8 or Windows-1252 when it doesn't, you will lose and corrupt data
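The corruption described above is easy to demonstrate in Python; this is the classic mojibake you get when UTF-8 bytes are misread as Windows-1252:

```python
# The euro sign's three UTF-8 bytes, decoded one byte per character.
utf8_bytes = "€".encode("utf-8")      # b'\xe2\x82\xac'
print(utf8_bytes.decode("cp1252"))    # 'â‚¬', three garbage characters instead of '€'
```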
You can select the encoding of your files in your IDE or editor. The usual recommendation is to use UTF-8 only; you will have to convert existing Windows-1252 files.
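A minimal conversion sketch in Python; the directory and file pattern below are hypothetical, and errors="strict" makes the script fail loudly on a file that is not actually Windows-1252 instead of silently corrupting it:

```python
from pathlib import Path

def convert_to_utf8(path: Path) -> None:
    # Decode as cp1252; any undefined byte raises instead of being guessed.
    text = path.read_text(encoding="cp1252", errors="strict")
    path.write_text(text, encoding="utf-8")

# Hypothetical batch run over a source tree:
# for p in Path("src").rglob("*.java"):
#     convert_to_utf8(p)
```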
Can someone please shed some more light on this?
The cp1252 decoding function is mostly an identity function: most byte values decode to the Unicode code point with the same numeric value.
cp1252 UCP (UCP = Unicode Code Point)
-------- --------
21 21 (!) (All numbers in hex)
31 31 (1)
41 41 (A)
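The identity behaviour shown above can be verified directly (a Python sketch); it actually holds for every byte outside the 0x80–0x9F range:

```python
# The three example bytes decode to code points with the same value...
for b in (0x21, 0x31, 0x41):
    assert ord(bytes([b]).decode("cp1252")) == b

# ...and so does every byte outside the 0x80-0x9F range.
for b in [*range(0x00, 0x80), *range(0xA0, 0x100)]:
    assert ord(bytes([b]).decode("cp1252")) == b
```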
This makes it seem like something expecting UCPs (not UTF-8) would also accept cp1252. The author of the linked answer points out that this is not the case:
cp1252 UCP
-------- --------
80 20AC (€)
85 2026 (…)
99 2122 (™)
The exceptions are all found between 80 and 9F, inclusive.
Something that accepts UCP will also accept iso-8859-1, but not cp1252.
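That difference can also be checked in Python (a sketch):

```python
# iso-8859-1 ("latin-1") decodes every byte to the code point with the
# same numeric value, so latin-1 text is already a sequence of UCPs.
assert all(ord(bytes([b]).decode("latin-1")) == b for b in range(256))

# cp1252 breaks that rule in the 0x80-0x9F range.
assert ord(b"\x80".decode("cp1252")) == 0x20AC   # '€', not U+0080
```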
Does that mean that if I batch/mass convert source code from cp1252 to utf-8 I'll get some characters that will end up as garbage?
No. Every character in cp1252 maps to a Unicode code point, so it can be converted to UTF-8 without loss using a proper tool.
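A Python sketch of that round trip, covering every byte that cp1252 defines (the five unassigned bytes 0x81, 0x8D, 0x8F, 0x90 and 0x9D are skipped):

```python
# Every defined cp1252 byte decodes to a code point that survives a
# UTF-8 round trip unchanged, so a correct converter loses nothing.
undefined = {0x81, 0x8D, 0x8F, 0x90, 0x9D}
original = bytes(b for b in range(256) if b not in undefined)
text = original.decode("cp1252")
assert text.encode("utf-8").decode("utf-8") == text
assert text.encode("cp1252") == original   # and it converts back, too
print("lossless round trip for all", len(original), "defined bytes")
```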