 

Mass Downloading of Webpages C#

Tags:

c#

web-crawler

My application requires that I download a large number of webpages into memory for further parsing and processing. What is the fastest way to do it? My current method (shown below) seems to be too slow and occasionally results in timeouts.

for (int i = 1; i <= pages; i++)
{
    string page_specific_link = baseurl + "&page=" + i.ToString();

    try
    {
        using (var client = new WebClient())
        {
            var pagesource = client.DownloadString(page_specific_link);
            sourcelist.Add(pagesource);
        }
    }
    catch (Exception)
    {
        // swallow the error and move on to the next page
    }
}
asked Sep 19 '11 by paradox

4 Answers

The way you approach this problem is going to depend very much on how many pages you want to download, and how many sites you're referencing.

I'll use a good round number like 1,000. If you want to download that many pages from a single site, it's going to take a lot longer than if you want to download 1,000 pages that are spread out across dozens or hundreds of sites. The reason is that if you hit a single site with a whole bunch of concurrent requests, you'll probably end up getting blocked.

So you have to implement a "politeness policy" that issues a delay between successive requests to a single site. The length of that delay depends on a number of things. If the site's robots.txt file has a crawl-delay entry, you should respect that. If they don't want you accessing more than one page per minute, then that's as fast as you should crawl. If there's no crawl-delay, base your delay on how long the site takes to respond: if you can download a page in 500 milliseconds, set your delay to some base value X; if it takes a full second, set it to 2X. In other words, scale the delay with the observed response time. You can probably cap the delay at 60 seconds (unless crawl-delay is longer), and I would recommend a minimum delay of 5 to 10 seconds.
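To make that concrete, here's a minimal sketch of such a delay calculation (the 2x multiplier and the 5/60 second bounds are illustrative assumptions, not fixed rules):

// Rough sketch: compute a per-site delay from robots.txt and the observed response time.
static TimeSpan GetPolitenessDelay(TimeSpan? crawlDelay, TimeSpan lastResponseTime)
{
    // A robots.txt crawl-delay entry always wins if present.
    if (crawlDelay.HasValue)
        return crawlDelay.Value;

    // Otherwise scale with how slowly the site responds (2x is an assumed multiplier),
    // clamped to a 5-second floor and a 60-second ceiling.
    double seconds = lastResponseTime.TotalSeconds * 2;
    return TimeSpan.FromSeconds(Math.Max(5, Math.Min(60, seconds)));
}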

I wouldn't recommend using Parallel.ForEach for this. My testing has shown that it doesn't do a good job. Sometimes it over-taxes the connection and often it doesn't allow enough concurrent connections. I would instead create a queue of WebClient instances and then write something like:

// Create a queue of WebClient instances
BlockingCollection<WebClient> ClientQueue = new BlockingCollection<WebClient>();
// Initialize the queue with some number of WebClient instances

// Now process the URLs
foreach (var url in urls_to_download)
{
    // Take blocks until a client is available, which naturally throttles concurrency
    var worker = ClientQueue.Take();
    worker.DownloadStringAsync(new Uri(url), url);  // pass the URL as the user token
}

When you initialize the WebClient instances that go into the queue, set their DownloadStringCompleted event handlers to point to a completed event handler. That handler should save the string to a file (or perhaps you should just use DownloadFileAsync), and then add the client back to the ClientQueue.
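A minimal sketch of what that completed handler might look like (ClientQueue is the collection from the snippet above; SavePage is a hypothetical helper for persisting the result):

// Assumed to be wired up as: client.DownloadStringCompleted += DownloadCompleted;
static void DownloadCompleted(object sender, DownloadStringCompletedEventArgs e)
{
    var client = (WebClient)sender;

    if (e.Error == null && !e.Cancelled)
        SavePage((string)e.UserState, e.Result);   // e.UserState is the URL passed as the user token

    // Return the client to the pool so the next URL can be taken.
    ClientQueue.Add(client);
}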

In my testing, I've been able to support 10 to 15 concurrent connections with this method. Any more than that and I run into problems with DNS resolution (DownloadStringAsync doesn't do the DNS resolution asynchronously). You can get more connections, but doing so is a lot of work.

That's the approach I've taken in the past, and it's worked very well for downloading thousands of pages quickly. It's definitely not the approach I took with my high performance Web crawler, though.

I should also note that there is a huge difference in resource usage between these two blocks of code:

WebClient MyWebClient = new WebClient();
foreach (var url in urls_to_download)
{
    MyWebClient.DownloadString(url);
}

---------------

foreach (var url in urls_to_download)
{
    WebClient MyWebClient = new WebClient();
    MyWebClient.DownloadString(url);
}

The first allocates a single WebClient instance that is used for all requests. The second allocates one WebClient for each request. The difference is huge. WebClient uses a lot of system resources, and allocating thousands of them in a relatively short time is going to impact performance. Believe me ... I've run into this. You're better off allocating just 10 or 20 WebClients (as many as you need for concurrent processing), rather than allocating one per request.
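To make the "handful of reused clients" point concrete, the ClientQueue from the earlier snippet might be seeded like this (the pool size of 10 is an assumption):

// Seed the pool with a fixed number of reusable WebClient instances.
var ClientQueue = new BlockingCollection<WebClient>();
for (int i = 0; i < 10; i++)
{
    var client = new WebClient();
    client.DownloadStringCompleted += DownloadCompleted;  // handler sketched above
    ClientQueue.Add(client);
}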

answered Nov 08 '22 by Jim Mischel


Why not just use a web crawling framework? It can handle all of this for you: multithreading, HTTP requests, link parsing, scheduling, politeness, and so on.

Abot (https://code.google.com/p/abot/) handles all that stuff for you and is written in c#.

answered Nov 08 '22 by sjdirect


In addition to @David's perfectly valid answer, I want to add a slightly cleaner "version" of his approach.

var pages = new List<string> { "http://bing.com", "http://stackoverflow.com" };
var sources = new BlockingCollection<string>();

Parallel.ForEach(pages, x =>
{
    using(var client = new WebClient())
    {
        var pagesource = client.DownloadString(x);
        sources.Add(pagesource);
    }
});
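If Parallel.ForEach saturates the connection (the concern raised in the answer above), its concurrency can at least be capped with ParallelOptions; a minimal sketch, with 10 as an assumed limit:

var options = new ParallelOptions { MaxDegreeOfParallelism = 10 };

Parallel.ForEach(pages, options, x =>
{
    using (var client = new WebClient())
    {
        sources.Add(client.DownloadString(x));
    }
});

On the .NET Framework you may also need to raise ServicePointManager.DefaultConnectionLimit, which defaults to two connections per host.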

Yet another approach that uses async:

static IEnumerable<string> GetSources(List<string> pages)
{
    var sources = new BlockingCollection<string>();
    var latch = new CountdownEvent(pages.Count);

    foreach (var p in pages)
    {
        // Don't wrap the client in a using block here: disposing it before the
        // async download completes would tear it down too early. Dispose it in
        // the completed handler instead.
        var wc = new WebClient();

        wc.DownloadStringCompleted += (x, e) =>
        {
            if (e.Error == null)
                sources.Add(e.Result);

            latch.Signal();
            wc.Dispose();
        };

        wc.DownloadStringAsync(new Uri(p));
    }

    latch.Wait();

    return sources;
}
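Usage is then just (a trivial sketch):

var pages = new List<string> { "http://bing.com", "http://stackoverflow.com" };

foreach (var source in GetSources(pages))
{
    Console.WriteLine(source.Length);  // e.g. inspect each downloaded page
}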
answered Nov 08 '22 by ebb


You should use parallel programming for this purpose.

There are a lot of ways to achieve what you want; the easiest would be something like this:

// Build the list of URLs
var pageList = new List<string>();

for (int i = 1; i <= pages; i++)
{
    pageList.Add(baseurl + "&page=" + i.ToString());
}

// Download the pages in parallel
Parallel.ForEach(pageList, page =>
{
    try
    {
        using (var client = new WebClient())
        {
            var pagesource = client.DownloadString(page);

            // sourcelist must be protected because multiple threads add to it
            lock (sourcelist)
            {
                sourcelist.Add(pagesource);
            }
        }
    }
    catch (Exception) {}
});
answered Nov 08 '22 by David