
Getting text from a URL in ASP.NET

Tags:

c#

asp.net

I am looking for a reliable way of extracting text from a given web address, in ASP.NET/C#. Can anyone point me in the right direction?

Also, the web address could be, say, a news site that might have a lot of ads, menus, etc. I need some intelligent way of extracting only the relevant content. I'm not sure how this could be done, as how would I even define what relevance is?

Should I maybe read from an RSS feed? Any thoughts on this?

EDIT: I have added a bounty. I am looking to extract "relevant" text from a URL. By "relevant" I mean it should exclude text from ads (and other irrelevant info). The input will be similar to a news site. I need to extract only the news info and get rid of the extraneous text.

Nick asked Feb 13 '10

2 Answers

Once you have downloaded the page and started using a library like HTML Agility Pack to parse the HTML, that's when your work starts :)

Screen scraping is divided into two parts.

First, the web crawler (there is lots of information on this on the web, and simple code using WebClient is provided in other answers). The crawler has to traverse links and download pages. If you are downloading a lot of pages and have the start URL, you could roll your own or use an existing one. Check out Wikipedia for a list of open source web crawlers/spiders.
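As a minimal sketch of that download step (the URL and class name below are placeholders of mine, not from the question), WebClient can fetch the raw HTML of a single page in a few lines:

```csharp
using System;
using System.Net;
using System.Text;

class PageDownloader
{
    // Fetch the raw HTML of a single page as a string.
    public static string DownloadHtml(string url)
    {
        using (var client = new WebClient())
        {
            client.Encoding = Encoding.UTF8; // avoid mangled non-ASCII characters
            return client.DownloadString(url);
        }
    }

    static void Main()
    {
        string html = DownloadHtml("http://example.com/article"); // placeholder URL
        Console.WriteLine(html.Length);
    }
}
```

A real crawler would wrap this in a loop over a queue of URLs harvested from each downloaded page, with politeness delays and duplicate detection; the sketch only covers the single-page fetch.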

The second part is parsing the HTML and pulling out only the text you want, omitting any noise (headers, banners, footers, etc.). Just traversing the DOM is easy with existing libraries; figuring out what to do with what you parse is the hard part.
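A rough sketch of that parsing step with HTML Agility Pack (a third-party NuGet package; the XPath used to strip noise is a heuristic of mine, not a universal rule):

```csharp
using System;
using System.Linq;
using HtmlAgilityPack; // third-party NuGet package

static class ContentExtractor
{
    // Strip script/style (and, heuristically, nav/footer) nodes
    // and return the remaining visible text.
    public static string ExtractText(string html)
    {
        var doc = new HtmlDocument();
        doc.LoadHtml(html);

        var noise = doc.DocumentNode.SelectNodes("//script|//style|//nav|//footer");
        if (noise != null) // SelectNodes returns null when nothing matches
            foreach (var node in noise.ToList())
                node.Remove();

        return HtmlEntity.DeEntitize(doc.DocumentNode.InnerText).Trim();
    }

    static void Main()
    {
        string html = "<html><body><script>var x;</script><p>Story text</p></body></html>";
        Console.WriteLine(ExtractText(html)); // Story text
    }
}
```

For a real news site you would extend the XPath with site-specific pointers (e.g. the `div` that holds the article body), which is exactly the manual tuning described above.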

I've written a bit about this before in another SO question, and it might give you some ideas on how to manually grab the content you want. From my experience there is no 100% reliable way to find the main content of a page, and more often than not you need to manually give it some pointers. The difficult part is that if the HTML layout of the page changes, your screen scraper will start to fail.

You could apply statistics and compare the html of several pages in order to deduce where the ads, menus etc are, in order to eliminate those.

Since you mention news sites, there are two other approaches which should be easier to apply to these sites compared to parsing out the text from the original html.

  1. Check if the page has a print URL. E.g., a link on CNN has an equivalent print URL which is much easier to parse.
  2. Check if the page has an RSS representation, and pick the article text from the RSS feed instead. Even if the feed doesn't have all the content, it should give you enough text to locate the article in the full HTML page.
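The RSS route in point 2 can be sketched with `System.ServiceModel.Syndication` (the inline feed below is a made-up stand-in so the example is self-contained; in practice you would point `XmlReader.Create` at the feed URL):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.ServiceModel.Syndication;
using System.Xml;

static class FeedReader
{
    // Return (title, summary) pairs for each item in an RSS feed.
    public static List<KeyValuePair<string, string>> ReadItems(XmlReader reader)
    {
        SyndicationFeed feed = SyndicationFeed.Load(reader);
        return feed.Items
            .Select(i => new KeyValuePair<string, string>(
                i.Title.Text,
                i.Summary != null ? i.Summary.Text : ""))
            .ToList();
    }

    static void Main()
    {
        // Tiny inline RSS 2.0 document standing in for a real feed URL.
        string rss =
            "<rss version=\"2.0\"><channel><title>Demo</title>" +
            "<link>http://example.com</link><description>demo feed</description>" +
            "<item><title>Headline</title><description>Body text</description></item>" +
            "</channel></rss>";

        foreach (var item in ReadItems(XmlReader.Create(new StringReader(rss))))
            Console.WriteLine(item.Key + ": " + item.Value);
    }
}
```

Each item's `<description>` maps to `SyndicationItem.Summary`, so even a truncated feed gives you clean title/teaser text to anchor a search in the full HTML page.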

Also check out The Easy Way to Extract Useful Text from Arbitrary HTML for ideas on how to create a more general parser. The code is in Python, but you should be able to convert it without too much trouble.

Mikael Svenson answered Sep 19 '22

I think you need an HTML parser like HtmlAgilityPack, or you can use the newborn baby, YQL. It's a new tool developed by Yahoo; its syntax is like SQL, and you need a little knowledge of XPath.

http://developer.yahoo.com/yql/

Thanks

Shakeeb Ahmed answered Sep 20 '22