
HTML scraping using lxml and requests gives a unicode error [duplicate]

I'm trying to use an HTML scraper like the one provided here. It works fine for the example they provide, but when I try it with my own web page I get this error: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration. I've tried googling but couldn't find a solution, and I'd truly appreciate any help. I'd like to know if there's a way to retrieve the page as HTML using Python.

Edit:

from lxml import html
import requests

page = requests.get('http://cancer.sanger.ac.uk/cosmic/gene/analysis?ln=PTEN&ln1=PTEN&start=130&end=140&coords=bp%3AAA&sn=&ss=&hn=&sh=&id=15#')
tree = html.fromstring(page.text)

Thank you.

asked Jul 29 '14 by user3783999


1 Answer

Short answer: use page.content, not page.text.

From http://lxml.de/parsing.html#python-unicode-strings :

the parsers in lxml.etree can handle unicode strings straight away ... This requires, however, that unicode strings do not specify a conflicting encoding themselves and thus lie about their real encoding
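The failure is easy to reproduce without any network call. A minimal sketch (a hard-coded document, not the asker's actual page): passing lxml a unicode string that carries its own encoding declaration raises exactly the error from the question.

```python
from lxml import html

# A unicode (str) document that declares its own encoding.
doc = '<?xml version="1.0" encoding="utf-8"?><html><body><p>hi</p></body></html>'

try:
    html.fromstring(doc)
except ValueError as err:
    # "Unicode strings with encoding declaration are not supported. ..."
    print(err)
```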

From http://docs.python-requests.org/en/latest/user/quickstart/#response-content :

Requests will automatically decode content from the server [as r.text]. ... You can also access the response body as bytes [as r.content].

So both requests (via r.text) and lxml want to decode the UTF-8 bytes to unicode. But if we let requests do the decoding first, the encoding declaration inside the document becomes a lie: lxml receives an already-decoded unicode string that still claims to be UTF-8 bytes.

So, let's use requests' content attribute, which does no decoding. That way lxml receives the raw, undecoded bytes, and the encoding declaration inside them is truthful.

answered Dec 30 '22 by Robᵩ