 

Python and BeautifulSoup encoding issues [duplicate]

I'm writing a crawler with Python using BeautifulSoup, and everything was going swimmingly till I ran into this site:

http://www.elnorte.ec/

I'm getting the contents with the requests library:

r = requests.get('http://www.elnorte.ec/')
content = r.content

If I print the content variable at that point, all the Spanish special characters seem to be working fine. However, once I feed the content variable to BeautifulSoup it all gets messed up:

soup = BeautifulSoup(content)
print(soup)
...
<a class="blogCalendarToday" href="/component/blog_calendar/?year=2011&amp;month=08&amp;day=27&amp;modid=203" title="1009 artículos en este día">
...

It's apparently garbling all the Spanish special characters (accents and so on). I've tried content.decode('utf-8') and content.decode('latin-1'), and also tried messing around with the fromEncoding parameter to BeautifulSoup, setting it to fromEncoding='utf-8' and fromEncoding='latin-1', but still no dice.

Any pointers would be much appreciated.

asked Aug 28 '11 by David

3 Answers

In your case the page contains invalid UTF-8 data, which confuses BeautifulSoup and makes it guess that the page uses windows-1252. You can use this trick:

soup = BeautifulSoup(content.decode('utf-8', 'ignore'))

By doing this you discard the invalid byte sequences from the page source, and BeautifulSoup will then guess the encoding correctly.

You can replace 'ignore' with 'replace' and check the text for '\ufffd' (the Unicode replacement character) to see what has been discarded.
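A quick sketch of the difference between the two error handlers, using some latin-1 bytes that are not valid UTF-8:

```python
# latin-1 bytes for accented Spanish text; the 0xED byte ('í') is not valid UTF-8
bad = 'artículos'.encode('latin-1')

print(bad.decode('utf-8', 'ignore'))   # invalid byte silently dropped: 'artculos'
print(bad.decode('utf-8', 'replace'))  # invalid byte marked: 'art\ufffdculos'
```

With 'replace' the damage stays visible in the output, which makes it easier to audit what the decoder threw away.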

Actually, writing a crawler that guesses page encodings correctly 100% of the time is a very hard task (browsers are very good at it nowadays). You can use a module like 'chardet', but in your case, for example, it guesses ISO-8859-2, which is not correct either.

If you really need to handle any page a user might supply, you should either build a multi-level detection function (try utf-8, then latin-1, then others, as we did in our project) or reuse detection code from Firefox or Chromium as a C module.
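A minimal sketch of such a multi-level fallback, assuming a simple "first codec that decodes cleanly wins" policy (the function name is my own, not from the answer):

```python
def decode_best_effort(raw, encodings=('utf-8', 'latin-1')):
    """Return (text, encoding) using the first codec that decodes cleanly."""
    for enc in encodings:
        try:
            return raw.decode(enc), enc
        except UnicodeDecodeError:
            continue
    # latin-1 maps every byte, so this is only reached with an empty list
    return raw.decode('utf-8', 'replace'), 'utf-8'

print(decode_best_effort('día'.encode('latin-1')))  # ('día', 'latin-1')
```

Strict utf-8 goes first because valid utf-8 rarely decodes by accident, while latin-1 accepts any byte sequence and therefore must come last.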

answered Nov 16 '22 by Riz


Could you try:

import urllib
import BeautifulSoup

r = urllib.urlopen('http://www.elnorte.ec/')
x = BeautifulSoup.BeautifulSoup(r.read())
r.close()

print x.prettify('latin-1')

I get the correct output. Oh, and in this special case you could also use x.__str__(encoding='latin1').

I guess this is because the content is actually in ISO-8859-1(5), while the meta http-equiv content-type incorrectly claims "UTF-8".

Could you confirm?
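One way to check for that kind of mismatch is to compare the charset the page declares with what the bytes actually decode as. A hedged sketch (the helper and the sample bytes are my own, not from the answer):

```python
import re

def declared_charset(raw):
    """Pull the charset declared in the first 2 KB of an HTML byte string."""
    m = re.search(rb'charset=["\']?([\w-]+)', raw[:2048], re.I)
    return m.group(1).decode('ascii').lower() if m else None

# Sample page: declares UTF-8, but the body bytes are actually latin-1
page = (b'<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">'
        b'<p>art\xedculos</p>')

print(declared_charset(page))  # utf-8 (what the page claims)
try:
    page.decode('utf-8')
    print('bytes decode cleanly as utf-8')
except UnicodeDecodeError:
    print('declared utf-8, but the bytes are not valid utf-8')
```

If the declared charset and the actual bytes disagree like this, the meta tag is lying and the decoder has to be chosen some other way.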

answered Nov 16 '22 by Gaikokujin Kun


You can try this; it prefers the encoding declared inside the HTML itself, falling back to the charset from the HTTP Content-Type header:

import requests
from bs4 import BeautifulSoup
from bs4.dammit import EncodingDetector

USERAGENT = "my-crawler/1.0"  # any descriptive User-Agent string
url = 'http://www.elnorte.ec/'

headers = {"User-Agent": USERAGENT}
resp = requests.get(url, headers=headers)
# Charset from the HTTP Content-Type header, if one was declared there
http_encoding = resp.encoding if 'charset' in resp.headers.get('content-type', '').lower() else None
# Charset declared inside the HTML document itself (meta tag / XML prolog)
html_encoding = EncodingDetector.find_declared_encoding(resp.content, is_html=True)
encoding = html_encoding or http_encoding
soup = BeautifulSoup(resp.content, 'lxml', from_encoding=encoding)
answered Nov 16 '22 by Shawn