I want to extract all URLs from a webpage. How can I do that with Nokogiri?
Example:
<div class="heat"> <a href='http://example.org/site/1/'>site 1</a> <a href='http://example.org/site/2/'>site 2</a> <a href='http://example.org/site/3/'>site 3</a> </div>
The result should be a list:
l = ['http://example.org/site/1/', 'http://example.org/site/2/', 'http://example.org/site/3/']
You can do it like this:
require 'nokogiri'

doc = Nokogiri::HTML.parse(<<-HTML_END)
<div class="heat">
<a href='http://example.org/site/1/'>site 1</a>
<a href='http://example.org/site/2/'>site 2</a>
<a href='http://example.org/site/3/'>site 3</a>
</div>
<div class="wave">
<a href='http://example.org/site/4/'>site 4</a>
<a href='http://example.org/site/5/'>site 5</a>
<a href='http://example.org/site/6/'>site 6</a>
</div>
HTML_END
l = doc.css('div.heat a').map { |link| link['href'] }
This solution finds all anchor elements inside div.heat using a CSS selector and collects their href attributes.
OK, this code works perfectly for me. Thanks to sris:
p doc.xpath('//div[@class="heat"]/a').map { |link| link['href'] }