I am working with a database that has characters like this scattered throughout it: ’. I need to take the data from the database, convert it to UTF-8, and then import it into a different database, using Python. When printed to the Windows Command Prompt, these characters look like this: \xe2\u20ac\u2122. I have tried various combinations of .decode(), .encode(), and unicode() to convert the data, but I'm really stuck.
Always decode on input, and encode on output. (There ought to be a handy mnemonic for this: perhaps "take your code [coat] off when you come indoors".)
Decode on input: You say that the database encoding is "UTF_8_bin". Are you using MySQL-Python? If so, then you can set the use_unicode
option when you connect to the database. Then all strings are fetched from the database as Unicode, so you don't have to worry about decoding them.
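For example, a minimal sketch of the input side, assuming MySQL-Python (MySQLdb); the connection details and the some_table/text_column names are placeholders for your own:

# Sketch only: connection details and table/column names are made up.
import MySQLdb

source = MySQLdb.connect(
    host='localhost',
    user='me',
    passwd='secret',
    db='source_db',
    charset='utf8',      # tell the driver the connection encoding
    use_unicode=True,    # fetch all text columns as unicode objects
)

cursor = source.cursor()
cursor.execute("SELECT id, text_column FROM some_table")
rows = cursor.fetchall()  # text values arrive already decoded to unicode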
Encode on output: You can find out the current character encoding (or "code page" as they call it in Windows) with the chcp
command. Let's suppose it's code page 1252. Then you can write
print text.encode('windows-1252')
to produce something that you can read from the Windows command line.
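A sketch of the same idea that reads the active code page from sys.stdout.encoding (falling back to windows-1252 as an assumed default) and passes 'replace' so characters the code page can't represent print as '?' instead of raising UnicodeEncodeError:

# Sketch only: the windows-1252 fallback is an assumption about your console.
import sys

def print_to_console(text):
    encoding = sys.stdout.encoding or 'windows-1252'
    print text.encode(encoding, 'replace')

print_to_console(u'caf\u00e9 \u2019')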
If you're writing the strings back to another MySQL database using MySQL-Python, you shouldn't need to do anything special: MySQL-Python claims that "you can always write Unicode strings" (regardless of whether you specified use_unicode
when you opened the connection).
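A sketch of the output side, again assuming MySQL-Python; the destination connection details and the imported_table/text_column names are placeholders. With a parameterized query the driver encodes the Unicode values for the connection charset itself:

# Sketch only: connection details and table/column names are made up.
import MySQLdb

dest = MySQLdb.connect(
    host='localhost', user='me', passwd='secret',
    db='dest_db', charset='utf8',
)
cursor = dest.cursor()

rows = [(1, u'caf\u00e9'), (2, u'it\u2019s')]  # e.g. the rows fetched earlier
cursor.executemany(
    "INSERT INTO imported_table (id, text_column) VALUES (%s, %s)",
    rows,
)
dest.commit()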