I'm trying to build a very small, niche search engine, using Nutch to crawl specific sites. Some of the sites are news/blog sites. If I crawl, say, techcrunch.com, and store and index their frontpage or any of their main pages, then within hours my index for that page will be out of date.
Does a large search engine such as Google have an algorithm to re-crawl frequently updated pages very frequently, hourly even? Or does it just score frequently updated pages very low so they don't get returned?
How can I handle this in my own application?
Good question. This is actually an active topic in the WWW research community. The technique involved is called a Re-crawl Strategy or Page Refresh Policy.
As far as I know, three different factors have been considered in the literature; the usual approach is to model the change of web pages as a Poisson process. You might want to decide which factor is more important for your application and users, and then check the references below for more details.
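To make the Poisson model concrete: if a page changes at rate lambda (changes per day), the probability that it has changed since you last crawled it dt days ago is 1 - exp(-lambda * dt). Here is a minimal sketch of how a crawler could use that; the function names and the 0.5 threshold are my own choices, not something from the papers:

```python
import math

def prob_changed(change_rate, days_since_crawl):
    """Probability the page has changed at least once since the last crawl,
    assuming changes arrive as a Poisson process with `change_rate`
    changes per day."""
    return 1.0 - math.exp(-change_rate * days_since_crawl)

def due_for_recrawl(change_rate, days_since_crawl, threshold=0.5):
    """Re-crawl once the chance of having missed a change passes the threshold.
    A news frontpage (~20 changes/day) trips this within the hour; a mostly
    static page (~0.01 changes/day) only after a couple of months."""
    return prob_changed(change_rate, days_since_crawl) >= threshold
```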
Edit: I briefly discuss the frequency estimator mentioned in [2] to get you started. Based on this, you should be able to figure out what might be useful to you in the other papers. :)
Please read the paper in the order I point out below. It should not be too hard to understand as long as you know some probability and stats 101 (maybe much less if you just take the estimator formula):
Step 1. Please go to Section 6.4 -- Application to a Web crawler. Here Cho lists three approaches to estimating the change frequency of web pages.
Step 2. The naive policy. Please go to Section 4. You will read: "Intuitively, we may use X/T (X: the number of detected changes, T: the monitoring period) as the estimated frequency of change."
The subsequent Section 4.1 proves that this estimator is biased, inconsistent, and inefficient.
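You can see the bias with a quick Monte-Carlo check: between two visits the page may change several times, but you can only detect whether it changed at all, so X/T systematically undercounts. This is just an illustration of Section 4.1's point; the setup and names are mine:

```python
import random

def naive_estimate(true_rate=3.0, interval=1.0, visits=10000):
    """Simulate the naive X/T estimator against a page that changes as a
    Poisson process with `true_rate` changes per day, visited every
    `interval` days."""
    detected = 0
    for _ in range(visits):
        # Count changes within one visit interval by summing exponential
        # inter-change gaps (this is how a Poisson process behaves).
        t, changes = 0.0, 0
        while True:
            t += random.expovariate(true_rate)
            if t > interval:
                break
            changes += 1
        if changes > 0:        # we only observe "changed" vs "not changed"
            detected += 1
    return detected / (visits * interval)   # X / T

print(naive_estimate())  # roughly 0.95 changes/day, far below the true 3.0
```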
Step 3. The improved estimator. Please go to Section 4.2. The new estimator is

r_hat = -log( (X_bar + 0.5) / (n + 0.5) )

where X_bar is n - X (the number of accesses on which the element did not change) and n is the number of accesses. Just take this formula and estimate the change frequency; you don't need to understand the proof in the rest of the sub-section.
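In code, assuming regular access intervals (the function name and the example numbers are mine; only the formula comes from the paper):

```python
import math

def estimated_change_rate(n, X, interval_days):
    """Cho's improved estimator from Section 4.2, as quoted above.

    n             -- number of times the page was accessed
    X             -- number of accesses on which a change was detected
    interval_days -- (regular) time between accesses, in days
    """
    X_bar = n - X                                   # accesses with no change
    r_hat = -math.log((X_bar + 0.5) / (n + 0.5))    # changes per access interval
    return r_hat / interval_days                    # changes per day

# e.g. visited daily for 30 days, a change was detected on 18 of the visits
print(estimated_change_rate(n=30, X=18, interval_days=1.0))  # ~0.89 changes/day
```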
Step 4. There are some tricks and useful techniques discussed in Section 4.3 and Section 5 that might be helpful to you. Section 4.3 discusses how to deal with irregular access intervals. Section 5 addresses the question: when the last-modification date of an element is available, how can we use it to estimate the change frequency? The proposed estimator that uses the last-modification date is given in that section, and the explanation following Fig. 10 in the paper is very clear.
Step 5. Now, if you are interested, you can take a look at the experimental setup and results in Section 6.
So that's it. If you feel more confident now, go ahead and try the freshness paper in [1].
References
[1] http://oak.cs.ucla.edu/~cho/papers/cho-tods03.pdf
[2] http://oak.cs.ucla.edu/~cho/papers/cho-freq.pdf
[3] http://hal.inria.fr/docs/00/07/33/72/PDF/RR-3317.pdf
[4] http://wwwconference.org/proceedings/www2005/docs/p401.pdf
[5] http://www.columbia.edu/~js1353/pubs/wolf-www02.pdf
[6] http://infolab.stanford.edu/~olston/publications/www08.pdf
Google's algorithms are mostly closed; they won't tell you how they do it.
I built a crawler using the concept of a directed graph and based the re-crawl rate on pages' degree centrality. You could consider a website to be a directed graph with pages as nodes and hyperlinks as edges. A node with high centrality will probably be a page that is updated more often. At least, that is the assumption.
This can be implemented by storing URLs and the links between them. If you crawl and never throw away any links, the graph for each site will grow. Calculating the (normalised) in-degree and out-degree of every node per site then gives you a measure of which pages are the most interesting to re-crawl more often.
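A minimal sketch of that idea, using plain dictionaries instead of a graph library; the names, the normalisation, and the way centrality maps to a re-crawl interval are my own choices:

```python
from collections import defaultdict

def degree_centrality(edges, num_pages):
    """edges: iterable of (from_url, to_url) hyperlinks within one site.
    Returns the normalised in-degree + out-degree per URL."""
    in_deg, out_deg = defaultdict(int), defaultdict(int)
    for src, dst in edges:
        out_deg[src] += 1
        in_deg[dst] += 1
    norm = max(num_pages - 1, 1)
    urls = set(in_deg) | set(out_deg)
    return {u: (in_deg[u] + out_deg[u]) / norm for u in urls}

def recrawl_interval_hours(centrality, fastest=1.0, slowest=24 * 7):
    """High-centrality hubs (frontpage, category pages) get ~hourly visits,
    low-centrality leaf articles ~weekly."""
    return slowest - (slowest - fastest) * min(centrality, 1.0)

# Tiny example: a frontpage linking to sections, sections linking to articles.
edges = [("/", "/tech"), ("/", "/startups"), ("/tech", "/article-1"),
         ("/startups", "/article-2"), ("/article-1", "/"), ("/article-2", "/")]
scores = degree_centrality(edges, num_pages=5)
for url, c in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(url, round(c, 2), f"recrawl every {recrawl_interval_hours(c):.0f}h")
```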