 

How to decode cp1252 text that uses decimal references like &#147; instead of \x93?

I am getting the source code of a webpage and the encoding is cp1252. Chrome displays the page correctly.

Here is my code:

import sys
from urllib.request import urlopen
from bs4 import BeautifulSoup, UnicodeDammit
import re
import codecs

url = "http://www.sec.gov/Archives/edgar/data/1400810/000119312513211026/d515005d10q.htm"
page = urlopen(url).read()
print(page)
# A little preview :
# b'...Regulation S-T (&#167;232.405 of this chapter) during the preceding 12 months (or for such shorter period that the\nregistrant was required to submit and post such files).&nbsp;&nbsp;&nbsp;&nbsp;Yes&nbsp;&nbsp;<FONT STYLE="FONT-FAMILY:WINGDINGS">&#120;</FONT>...'

soup = BeautifulSoup(page, from_encoding="cp1252")
print(str(soup).encode('utf-8'))
# Same preview section as above
# b'...Regulation S-T (\xc2\xa7232.405 of this chapter) during the preceding 12 months (or for such shorter period that the\nregistrant was required to submit and post such files).\xc2\xa0\xc2\xa0\xc2\xa0\xc2\xa0Yes\xc2\xa0\xc2\xa0<font style="FONT-FAMILY:WINGDINGS">x</font>'

From the preview section, we can see that:
&nbsp; = \xc2\xa0
&#167; = \xc2\xa7
&#120; = x

For the cp1252 encoding standard, I am referring to http://en.wikipedia.org/wiki/Windows-1252#Code_page_layout and /Lib/encodings/cp1252.py

When I use BeautifulSoup(page, from_encoding="cp1252") some characters are encoded correctly, but some others aren't.

character | decimal encoding | cp1252->utf-8 encoding
“ | &#147; | \xc2\x93 (wrong)
” | &#148; | \xc2\x94 (wrong)
X | &#120; | \xc2\x92 (wrong)
§ | &#167; | \xc2\xa7 (ok)
þ | &#254;
¨ | &#168;
' | &#146; | \xc2\x92 (wrong)
– | &#150;
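
A quick sanity check in plain Python shows where the "wrong" bytes come from: treating the decimal reference as a code point directly yields a C1 control character, not the curly quote that was intended:

```python
# chr(147) is U+0093, a C1 control character -- not a curly quote
print(chr(147).encode('utf-8'))    # b'\xc2\x93'  (the "wrong" bytes above)

# the character actually intended is U+201C LEFT DOUBLE QUOTATION MARK
print('\u201c'.encode('utf-8'))    # b'\xe2\x80\x9c'
print('\u201c'.encode('cp1252'))   # b'\x93'
```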

I use this code to get equivalence:

characters = "’ “ ” X § þ ¨ ' –"

for ch in characters.split():
    print(ch)
    cp1252 = ch.encode('cp1252')
    print(cp1252)

    decimal = cp1252[0]  # first byte of the cp1252 encoding, as an int

    special = "&#" + str(decimal)
    print(special)
    print(ch.encode('utf-8'))
    print()

offenders = [120, 146]

for n in offenders:
    toHex = hex(n)
    print(toHex)
print()

#120
off = b'\x78'
print(off)
buff = off.decode('cp1252')
print(buff)
uni = buff.encode('utf-8')
print(uni)
print()

#146
off = b'\x92'
print(off)
buff = off.decode('cp1252')
print(buff)
uni = buff.encode('utf-8')
print(uni)
print()

Output:

’
b'\x92'
&#146
b'\xe2\x80\x99'

“
b'\x93'
&#147
b'\xe2\x80\x9c'

”
b'\x94'
&#148
b'\xe2\x80\x9d'

X
b'X'
&#88
b'X'

§
b'\xa7'
&#167
b'\xc2\xa7'

þ
b'\xfe'
&#254
b'\xc3\xbe'

¨
b'\xa8'
&#168
b'\xc2\xa8'

'
b"'"
&#39
b"'"

–
b'\x96'
&#150
b'\xe2\x80\x93'

0x78
0x92

b'x'
x
b'x'

b'\x92'
’
b'\xe2\x80\x99'

Some characters failed the copy-paste into the editor, like the weird X and the weird ', so I've added some code to deal with that.

What can I do to get \xe2\x80\x9d instead of \xc2\x94 for ” (&#148;)?

My setup:
Windows 7
Terminal : chcp 1252 + Lucida Console font
Python 3.3
BeautifulSoup 4

Looking forward to your answers

Lazik asked Nov 03 '22

2 Answers

A numeric character reference in HTML refers to a Unicode code point, i.e., it doesn't depend on the character encoding of the document; e.g., &#148; is U+0094 CANCEL CHARACTER.
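
(Side note beyond the original answer: browsers, and Python 3.4+'s html.unescape, apply the HTML5 error-recovery rule that remaps numeric references in the C1 range through windows-1252, which is why Chrome displays the intended quotes:)

```python
import html  # html.unescape is available from Python 3.4

# HTML5 error recovery remaps C1-range numeric references via windows-1252,
# so the invalid &#147; and the correct &#8220; produce the same curly quote
print(html.unescape('&#147;'))   # '\u201c'
print(html.unescape('&#8220;'))  # '\u201c'
```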

b"\xe2\x80\x9d" bytes interpreted as utf-8 are U+201D RIGHT DOUBLE QUOTATION MARK:

u'\u201d'.encode('utf-8') == b'\xe2\x80\x9d'
u'\u201d'.encode('cp1252') == b'\x94'
u'\u201d'.encode('ascii', 'xmlcharrefreplace') == b'&#8221;'

To fix the code, remove unnecessary bits:

from urllib.request import urlopen
from bs4 import BeautifulSoup

url = "http://www.sec.gov/path/to.htm"
soup = BeautifulSoup(urlopen(url))
print(soup)

If it fails, try sys.stdout.buffer.write(soup.encode('cp1252')) or set the PYTHONIOENCODING environment variable to cp1252:xmlcharrefreplace.
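
A minimal sketch of that encode-for-a-cp1252-console fallback (the sample string here is made up for illustration):

```python
# encode for a cp1252 console; anything outside cp1252 becomes a
# numeric character reference instead of raising UnicodeEncodeError
text = '\u201cHello\u201d \u00a7232.405 \u2603'  # U+2603 SNOWMAN is not in cp1252
encoded = text.encode('cp1252', 'xmlcharrefreplace')
print(encoded)   # b'\x93Hello\x94 \xa7232.405 &#9731;'
```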

jfs answered Nov 16 '22


This is what I ended up using:

import re
from urllib.request import urlopen
from bs4 import BeautifulSoup

def reformatCp1252(match):
    codePoint = int(match.group(1))
    # turn C1-range numeric references back into raw cp1252 bytes
    if 128 <= codePoint <= 159:
        return bytes([codePoint])
    else:
        return match.group()

localPage = urlopen(r_url).read()
formatedPage = re.sub(b'&#(\d+);', reformatCp1252, localPage, flags=re.I)
localSoup = BeautifulSoup(formatedPage, "lxml", from_encoding="windows-1252")

Note: I am using bs4 with Python 3.3 on Windows 7.

I discovered that the from_encoding argument to BeautifulSoup doesn't really matter here: you can pass utf-8 or windows-1252 and the output is fully utf-8 either way, with the windows-1252 bytes converted to utf-8.
Basically, all the numeric character references are interpreted as Unicode code points, while single bytes like \x93 are interpreted as windows-1252.

As far as I know, only code points 128 to 159 differ between windows-1252 and the corresponding Unicode characters.
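
That claim can be checked directly (a small verification sketch; the five cp1252 bytes with no mapping decode to U+FFFD with errors='replace', so they count as differing too):

```python
# find every byte whose cp1252 decoding differs from its Unicode code point
diff = [n for n in range(256)
        if bytes([n]).decode('cp1252', errors='replace') != chr(n)]
print(diff == list(range(128, 160)))   # True: exactly the C1 block differs
```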

For example, a mixed input (windows-1252 bytes \x93 and \x94 alongside the numeric reference &#376;) comes out as consistent utf-8:

byteStream = b'\x93Hello\x94 (\xa7232.405 of this chapter) &#376; \x87'
# with the code above
print(localSoup.encode('utf-8'))
# and you can see that \x93 was transformed to its utf-8 equivalent.
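
For completeness, here is that example end-to-end without BeautifulSoup, just the regex pass plus a windows-1252 decode (a sketch of the same technique):

```python
import re

def reformatCp1252(match):
    codePoint = int(match.group(1))
    if 128 <= codePoint <= 159:
        return bytes([codePoint])  # turn the bad reference back into a raw byte
    return match.group()

byteStream = b'\x93Hello\x94 (\xa7232.405 of this chapter) &#376; \x87'
fixed = re.sub(b'&#(\d+);', reformatCp1252, byteStream)
# the whole stream is now consistently windows-1252
text = fixed.decode('windows-1252')
print(text.encode('utf-8'))
# b'\xe2\x80\x9cHello\xe2\x80\x9d (\xc2\xa7232.405 of this chapter) &#376; \xe2\x80\xa1'
```

Note that &#376; (Ÿ, outside the 128-159 range) is deliberately left alone for the HTML parser to handle later.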
Lazik answered Nov 16 '22