I'm working on a project where I need a mature crawler, and I'm evaluating Nutch for this purpose. My needs are relatively straightforward: the crawler should save the fetched data to disk, and on a re-crawl it should fetch only the resources of a site that have been updated, skipping the parts that have already been crawled.

Does anyone have experience working with the Nutch code directly from Java, rather than via the command line? I would like to start simple: create a crawler (or similar), configure it minimally, and start it; nothing fancy. Is there an example of this, or a resource I should be looking at? I'm going over the Nutch documentation, but most of it covers the command line, search, and other topics. How usable is the Nutch crawling module on its own, without indexing and search?

Any help is appreciated. Thanks.
Nutch is probably very different from anything you have worked with before. It is more of a framework than a library: besides the crawling part, it includes indexing (into a Lucene index) and a front end for query and search, although Solr is generally more powerful than the native Nutch search front end.
If you want to use the crawled data for anything other than search, you will need to develop your own programs and be familiar with Hadoop and MapReduce programming.
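That said, the crawl cycle can be driven from Java; the old org.apache.nutch.crawl.Crawl class did essentially this in its main method and is the best example to read. Below is a rough sketch of one inject/generate/fetch/parse/update round against Nutch 1.x. The class names are real, but the exact method signatures (notably Generator.generate and Fetcher.fetch) differ between releases, and the paths here are placeholders, so treat it as a starting point rather than working code:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.nutch.crawl.CrawlDb;
import org.apache.nutch.crawl.Generator;
import org.apache.nutch.crawl.Injector;
import org.apache.nutch.fetcher.Fetcher;
import org.apache.nutch.parse.ParseSegment;
import org.apache.nutch.util.NutchConfiguration;

public class SimpleCrawl {
    public static void main(String[] args) throws Exception {
        // Reads nutch-default.xml / nutch-site.xml from the classpath.
        Configuration conf = NutchConfiguration.create();

        Path crawlDb  = new Path("crawl/crawldb");   // placeholder paths
        Path segments = new Path("crawl/segments");
        Path urlDir   = new Path("urls");            // directory of seed-URL text files

        // 1. Inject the seed URLs into the crawl database.
        new Injector(conf).inject(crawlDb, urlDir);

        // 2. Generate a fetch list (top 1000 URLs due for fetching).
        //    Signature varies by release; this matches Nutch 1.x.
        Path[] segs = new Generator(conf)
            .generate(crawlDb, segments, 1, 1000L, System.currentTimeMillis());
        if (segs == null) return;  // nothing is due for (re)fetching

        // 3. Fetch the segment with 10 threads.
        new Fetcher(conf).fetch(segs[0], 10);

        // 4. Parse the fetched content.
        new ParseSegment(conf).parse(segs[0]);

        // 5. Update the crawl db; this is what lets a later round
        //    re-fetch only pages whose fetch interval has expired.
        new CrawlDb(conf).update(crawlDb, segs, true, true);
    }
}
```

Run it with the Nutch jars and conf directory on the classpath, and repeat steps 2-5 in a loop for a deeper crawl. Note that each of these calls launches a Hadoop job under the hood, which is why familiarity with MapReduce helps.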
I'm not sure exactly what you want to do with your crawl, but from what you describe, it doesn't look like Nutch is the solution.