I'm processing some data files that are supposed to be valid UTF-8 but aren't, which causes the parser (not under my control) to fail. I'd like to add a stage of pre-validating the data for UTF-8 well-formedness, but I've not yet found a utility to help do this.
There's a web service at W3C which appears to be dead, and I've found a Windows-only validation tool that reports invalid UTF-8 files but doesn't report which lines/characters to fix.
I'd be happy with either a tool I can drop in and use (ideally cross-platform), or a ruby/perl script I can make part of my data loading process.
Open the file in Notepad. Click 'Save As...'. In the 'Encoding:' combo box you will see the current file format.

Yes, I opened the file in Notepad and selected the UTF-8 format and saved it.
There are a few options you can use: check the Content-Type to see if it includes a charset parameter, which would indicate the encoding (e.g. Content-Type: text/plain; charset=utf-16); check if the uploaded data has a BOM (the first few bytes in the file, which would map to the Unicode character U+FEFF - 2 bytes for ...
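To illustrate the BOM idea, here is a minimal sketch in Python (my own illustration, not part of this answer; the file path is hypothetical):

# Sketch: look for a BOM in the first bytes of the data (path is hypothetical).
with open("path/to/file", "rb") as f:
    head = f.read(4)

if head.startswith(b"\xef\xbb\xbf"):
    encoding = "utf-8-sig"    # UTF-8 BOM (3 bytes)
elif head.startswith(b"\xff\xfe") or head.startswith(b"\xfe\xff"):
    encoding = "utf-16"       # UTF-16 BOM (2 bytes, either byte order)
else:
    encoding = None           # no BOM; fall back to other hints such as charset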
It could be simpler by using only one line: codecs.open("path/to/file", encoding="utf-8", errors="strict").
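Note that opening the file alone doesn't validate it; the strict decoder only sees the bytes when you read them. A validation pass might look like this sketch (my own, with a hypothetical path):

import codecs

try:
    # errors="strict" makes read() raise UnicodeDecodeError on the first bad byte.
    with codecs.open("path/to/file", encoding="utf-8", errors="strict") as f:
        f.read()
    print("valid UTF-8")
except UnicodeDecodeError as exc:
    print("invalid UTF-8: %s" % exc)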
To verify whether a file is valid in an encoding such as ASCII, ISO-8859-1, or UTF-8, a good solution is the 'iconv' command.
You can use GNU iconv:
$ iconv -f UTF-8 your_file -o /dev/null; echo $?
Or with older versions of iconv, such as on macOS:
$ iconv -f UTF-8 your_file > /dev/null; echo $?
The command will return 0 if the file could be converted successfully, and 1 if not. Additionally, it will print out the byte offset where the invalid byte sequence occurred.
Edit: The output encoding doesn't have to be specified, it will be assumed to be UTF-8.
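If you want to fold the iconv check into a loading script rather than a shell pipeline, a rough wrapper might look like this (a sketch of my own, not part of the answer; it assumes iconv is on the PATH and only inspects the exit status):

import os
import subprocess

def is_valid_utf8(path):
    # iconv exits 0 when the whole file converts cleanly; on an invalid
    # sequence it exits non-zero and reports the byte offset on stderr.
    with open(os.devnull, "wb") as devnull:
        return subprocess.call(["iconv", "-f", "UTF-8", "-t", "UTF-8", path],
                               stdout=devnull) == 0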
Use Python and the str .encode()/.decode() methods.
>>> a="γεια"
>>> a
'\xce\xb3\xce\xb5\xce\xb9\xce\xb1'
>>> b='\xce\xb3\xce\xb5\xce\xb9\xff\xb1'  # note second-to-last char changed
>>> print b.decode("utf_8")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.5/encodings/utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xff in position 6: unexpected code byte
The exception thrown has the info requested in its .args property.
>>> try: print b.decode("utf_8")
... except UnicodeDecodeError, exc: pass
...
>>> exc
UnicodeDecodeError('utf8', '\xce\xb3\xce\xb5\xce\xb9\xff\xb1', 6, 7, 'unexpected code byte')
>>> exc.args
('utf8', '\xce\xb3\xce\xb5\xce\xb9\xff\xb1', 6, 7, 'unexpected code byte')
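Building on that, here is a sketch of my own (the function name check_utf8 is made up, not from the answer) that walks a file line by line and reports the line number and byte offset of the first invalid sequence, which is what the original question asks for. It is written so it runs under either Python 2.6+ or Python 3:

import sys

def check_utf8(path):
    # Read in binary mode and try to decode each line strictly as UTF-8.
    with open(path, "rb") as f:
        for lineno, line in enumerate(f, 1):
            try:
                line.decode("utf-8")
            except UnicodeDecodeError as exc:
                # exc.start is the byte offset of the bad sequence within this line.
                print("%s: line %d, byte %d: %s" % (path, lineno, exc.start, exc.reason))
                return False
    return True

if __name__ == "__main__":
    results = [check_utf8(p) for p in sys.argv[1:]]
    sys.exit(0 if all(results) else 1)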