I'm using lxml in Python to parse some HTML and I want to extract all links to images. The way I do it right now is:
//a[contains(@href,'.jpg') or contains(@href,'.jpeg') or ... (etc)]
There are a couple of problems with this approach.
I wanted to use a regexp instead, but I failed:
//a[regx:match(@href,'.*\.(?:png|jpg|jpeg)')]
This returned all the links every time.
Does anyone know the right, elegant way to do this, or what is wrong with my regexp approach?
Instead of:
a[contains(@href,'.jpg')]
Use:
a[substring(@href, string-length(@href)-3)='.jpg']
(and the same expression pattern for the other possible endings).
The above expression is the XPath 1.0 equivalent to the following XPath 2.0 expression:
a[ends-with(@href, '.jpg')]
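lxml's XPath engine implements XPath 1.0 only, so the substring form is the one to use there. A minimal sketch (the URL is just an illustrative example):

from lxml import html

doc = html.parse('http://apod.nasa.gov/apod/astropix.html')

# XPath 1.0 emulation of ends-with(): take the last 4 characters of @href
# and compare them to '.jpg'. substring() is 1-indexed, so
# string-length(@href) - 3 is the position of the 4th character from the end.
jpg_links = doc.xpath(
    "//a[substring(@href, string-length(@href) - 3) = '.jpg']/@href")
print(jpg_links)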
Use XPath to return all <a> elements and use a Python list comprehension to filter down to those matching your regex.
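A minimal sketch of that approach (the regex and URL are illustrative):

import re
from lxml import html

IMG_RE = re.compile(r'\.(?:png|jpg|jpeg)$', re.IGNORECASE)

doc = html.parse('http://apod.nasa.gov/apod/astropix.html')
# take every @href, then filter on the Python side
img_urls = [href for href in doc.xpath('//a/@href') if IMG_RE.search(href)]
print(img_urls)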
lxml supports regular expressions in the EXSLT namespace:
from lxml import html

# download & parse the web page
doc = html.parse('http://apod.nasa.gov/apod/astropix.html')

# find the first <a> whose @href ends with .png, .jpg, or .jpeg, ignoring case
ns = {'re': "http://exslt.org/regular-expressions"}
img_url = doc.xpath(r"//a[re:test(@href, '\.(?:png|jpg|jpeg)$', 'i')]/@href",
                    namespaces=ns, smart_strings=False)[0]
print(img_url)
There's no guarantee that a link has a file extension at all, or that the extension matches the content (a .jpg URL can return an HTML error page, for example), and that limits your options.
The only correct way to gather all images from a site is to take every link and query it with an HTTP HEAD request to find out what Content-Type the server sends for it. If the content type is image/(anything), it's an image; otherwise it's not.
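A sketch of that check, assuming the third-party requests library is available (the helper name is mine, and some servers answer HEAD incorrectly, so even this is not bulletproof):

import requests

def is_image_url(url):
    """Return True if the server reports an image/* Content-Type for url."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
    except requests.RequestException:
        return False
    return resp.headers.get('Content-Type', '').startswith('image/')

You could then filter the scraped links with something like [u for u in links if is_image_url(u)].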
Scraping the URLs for common file extensions will probably get you 99.9% of the images, though. It's not elegant, but neither is most HTML. I recommend settling for 99.9% in this case; the extra 0.1% isn't worth the effort.