 

Where to store web crawler data?

I have a simple web crawler that starts at a root (a given URL), downloads the HTML of the root page, then scans it for hyperlinks and crawls them. I currently store the HTML pages in an SQL database. I am facing two problems:

  1. The crawling seems to hit a bottleneck and isn't able to go any faster. I've read somewhere that making multi-threaded HTTP requests can speed the crawler up, but I am not sure how to do this.

  2. I need an efficient data structure to store the HTML pages and to run data-mining operations on them. I am currently using an SQL database, but would like to hear other recommendations.

I am using the .NET Framework, C#, and MS SQL.

asked Jan 17 '12 by Mike G




2 Answers

So first and foremost, I wouldn't worry about getting into distributed crawling and storage, because, as the name suggests, it requires a decent number of machines for you to get good results. Unless you have a farm of computers, you won't really benefit from it. You can build a crawler that gets 300 pages per second and run it on a single computer with a 150 Mbps connection.

The next thing on the list is to determine where your bottleneck is.

Benchmark Your System

Try to eliminate MS SQL:

  • Load a list of, say, 1000 URLs that you want to crawl.
  • Benchmark how fast you can crawl them.

If 1000 URLs doesn't give you a large enough crawl, then get 10000 URLs or 100k URLs (or if you're feeling brave, then get the Alexa top 1 million). In any case, try to establish a baseline with as many variables excluded as possible.
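
To make that baseline concrete, here is a minimal sketch that times raw fetching with HttpClient and nothing else (no parsing, no database). The urls.txt file name and the catch-all error handling are just assumptions for illustration:

```csharp
// Baseline benchmark sketch: fetch a list of URLs sequentially and time it.
// Assumes a local "urls.txt" with one URL per line (e.g. your 1,000 seeds).
using System;
using System.Diagnostics;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class CrawlBenchmark
{
    static async Task Main()
    {
        var urls = File.ReadAllLines("urls.txt");
        var http = new HttpClient();
        var timer = Stopwatch.StartNew();

        int fetched = 0;
        foreach (var url in urls)
        {
            try
            {
                await http.GetStringAsync(url);   // download only; skip parsing/storage
                fetched++;
            }
            catch (Exception) { /* ignore dead or malformed links for the benchmark */ }
        }

        timer.Stop();
        Console.WriteLine($"{fetched} pages in {timer.Elapsed.TotalSeconds:F1}s " +
                          $"({fetched / timer.Elapsed.TotalSeconds:F1} pages/sec)");
    }
}
```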

Identify Bottleneck

After you have your baseline for crawl speed, try to determine what's causing your slowdown. You will also need to start using multithreading, because you're I/O bound and have a lot of idle time between fetching pages that you can spend extracting links and doing other things, like working with the database.
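
Because the work is I/O bound, the usual fix is to keep many requests in flight at once: async calls with a bounded degree of concurrency. A rough sketch of the idea, with the limit of 32 concurrent requests picked arbitrarily:

```csharp
// Sketch of bounded-concurrency fetching: up to maxConcurrency downloads in
// flight at once, gated by a SemaphoreSlim so you don't open thousands of sockets.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class ConcurrentFetcher
{
    static readonly HttpClient Http = new HttpClient();

    public static async Task<IDictionary<string, string>> FetchAllAsync(
        IEnumerable<string> urls, int maxConcurrency = 32)
    {
        var gate = new SemaphoreSlim(maxConcurrency);
        var results = new Dictionary<string, string>();
        var tasks = new List<Task>();

        foreach (var url in urls)
        {
            await gate.WaitAsync();               // wait for a free slot
            tasks.Add(Task.Run(async () =>
            {
                try
                {
                    var html = await Http.GetStringAsync(url);
                    lock (results) results[url] = html;   // protect the shared dictionary
                }
                catch (HttpRequestException) { /* skip unreachable pages */ }
                finally { gate.Release(); }
            }));
        }

        await Task.WhenAll(tasks);
        return results;
    }
}
```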

How many pages per second are you getting now? You should try to get more than 10 pages per second.

Improve Speed

Obviously, the next step is to tweak your crawler as much as possible:

  • Try to speed up your crawler so it hits the hard limits, such as your bandwidth.
  • I would recommend using asynchronous sockets, since they're MUCH faster than blocking sockets, WebRequest/HttpWebRequest, etc.
  • Use a faster HTML parsing library: start with HtmlAgilityPack and, if you're feeling brave, try the Majestic12 HTML Parser.
  • Use an embedded database rather than an SQL database, and take advantage of key/value storage: hash the URL for the key and store the HTML and other relevant data as the value (see the sketch after this list).
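
To make the last two bullets concrete, here is a rough sketch that hashes the URL into a fixed-size key, writes the HTML under that key (a file per key simply stands in for whatever embedded store you pick), and pulls links out with HtmlAgilityPack. The PageStore name and directory layout are made up for illustration:

```csharp
// Key/value storage sketch plus link extraction with HtmlAgilityPack.
using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;
using System.Text;
using HtmlAgilityPack;

static class PageStore
{
    // SHA-256 of the URL gives a fixed-size, filesystem-safe key.
    static string KeyFor(string url)
    {
        using (var sha = SHA256.Create())
        {
            var hash = sha.ComputeHash(Encoding.UTF8.GetBytes(url));
            return BitConverter.ToString(hash).Replace("-", "");
        }
    }

    // Store the HTML under its key; a real embedded store would replace the file write.
    public static void Save(string rootDir, string url, string html)
    {
        Directory.CreateDirectory(rootDir);
        File.WriteAllText(Path.Combine(rootDir, KeyFor(url) + ".html"), html);
    }

    // Extract the href of every anchor on the page.
    public static IEnumerable<string> ExtractLinks(string html)
    {
        var doc = new HtmlDocument();
        doc.LoadHtml(html);
        var anchors = doc.DocumentNode.SelectNodes("//a[@href]");
        if (anchors == null) yield break;          // page had no links
        foreach (var a in anchors)
            yield return a.GetAttributeValue("href", "");
    }
}
```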

Go Pro!

If you've mastered all of the above, then I would suggest you try to go pro! It's important to have a good selection algorithm that mimics PageRank in order to balance freshness and coverage: OPIC (Adaptive Online Page Importance Computation) is pretty much the latest and greatest in that respect. If you have the above tools, then you should be able to implement OPIC and run a fairly fast crawler.
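
To give a flavor of OPIC without claiming to reproduce the paper, here is a very rough sketch of its cash-distribution idea: every page holds some "cash", crawling a page hands that cash to its outlinks, and the page holding the most cash is crawled next. The real algorithm also uses a virtual node and history windows, which this sketch leaves out:

```csharp
// Very rough sketch of OPIC-style crawl ordering (cash distribution only).
using System;
using System.Collections.Generic;
using System.Linq;

class OpicFrontier
{
    readonly Dictionary<string, double> cash = new Dictionary<string, double>();
    readonly Dictionary<string, double> history = new Dictionary<string, double>();

    public void Seed(string url, double initialCash = 1.0) => cash[url] = initialCash;

    // Pick the URL currently holding the most cash.
    public string Next() => cash.OrderByDescending(kv => kv.Value).First().Key;

    // After crawling `url`, move its cash into its history and split it among its outlinks.
    public void Record(string url, IReadOnlyList<string> outLinks)
    {
        double c = cash.TryGetValue(url, out var v) ? v : 0;
        history[url] = (history.TryGetValue(url, out var h) ? h : 0) + c;
        cash[url] = 0;

        if (outLinks.Count == 0) return;
        double share = c / outLinks.Count;
        foreach (var link in outLinks)
            cash[link] = (cash.TryGetValue(link, out var e) ? e : 0) + share;
    }
}
```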

If you're flexible on the programming language and don't want to stray too far from C#, you can try Java-based, enterprise-level crawlers such as Nutch. Nutch integrates with Hadoop and all kinds of other highly scalable solutions.

answered Oct 16 '22 by Kiril


This is what Google's BigTable was designed for. HBase is a popular open-source clone, but you'll need to deal with Java and (probably) Linux. Cassandra is also written in Java, but runs on Windows. Both have .NET clients.

Because they are designed to be distributed across many machines (implementations in the thousands of nodes exist), they can sustain extremely heavy read/write loads, far more than even the fastest SQL Server or Oracle hardware could.

If you are not comfortable with Java infrastructure, you might want to look into Microsoft's Azure Table Storage for similar characteristics. That's a hosted/cloud solution, though; you can't run it on your own hardware.

As for processing the data, if you go for HBase or Cassandra you can use Hadoop MapReduce. MR was popularized by Google for exactly the task you are describing: processing huge amounts of web data. In a nutshell, the idea is that rather than running your algorithm in one place and piping all of the data through it, MapReduce sends your program out to run on the machines where the data is stored. It allows you to run algorithms on basically unlimited amounts of data, assuming that you have the hardware for it.
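
Purely to show the shape of such a job rather than any real Hadoop API, here is a "which words appear on the most pages?" computation collapsed onto one machine with LINQ; a real MapReduce job would ship the same map and reduce steps out to the nodes that hold the pages:

```csharp
// Illustrative map/shuffle/reduce structure over crawled pages, done locally with LINQ.
using System;
using System.Collections.Generic;
using System.Linq;

static class WordsPerPage
{
    // Input: (url, extracted text) pairs. Output: (word, number of distinct pages) pairs.
    public static IEnumerable<KeyValuePair<string, int>> Run(
        IEnumerable<KeyValuePair<string, string>> pages)
    {
        return pages
            // Map: emit a (word, url) pair for every word on every page.
            .SelectMany(p => p.Value
                .Split(new[] { ' ', '\t', '\n' }, StringSplitOptions.RemoveEmptyEntries)
                .Select(w => new { Word = w.ToLowerInvariant(), Url = p.Key }))
            // Shuffle: group the pairs by word.
            .GroupBy(x => x.Word)
            // Reduce: count the distinct pages each word appears on.
            .Select(g => new KeyValuePair<string, int>(
                g.Key, g.Select(x => x.Url).Distinct().Count()))
            .OrderByDescending(kv => kv.Value);
    }
}
```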

answered Oct 16 '22 by Chris Shain