
How should my scraping "stack" handle 404 errors?

I have a rake task that is responsible for doing batch processing on millions of URLs. Because this process takes so long I sometimes find that URLs I'm trying to process are no longer valid -- 404s, site's down, whatever.

When I initially wrote this there was basically just one site that would continually go down while processing so my solution was to use open-uri, rescue any exceptions produced, wait a bit, and then retry.

This worked fine when the dataset was smaller, but now so much time goes by that I'm finding URLs are no longer there and produce a 404.

In the case of a 404, my script just sits there and retries in an infinite loop -- obviously bad.

How should I handle cases where a page doesn't load successfully, and more importantly how does this fit into the "stack" I've built?

I'm pretty new to this, and Rails, so any opinions on where I might have gone wrong in this design are welcome!

Here is some anonymized code that shows what I have:

The rake task that makes a call to MyHelperModule:

# lib/tasks/my_app_tasks.rake
namespace :my_app do
  desc "Batch processes some stuff @ a later time."
  task :process_the_batch => :environment do
    # The dataset being processed is millions of rows,
    # so this is a big job and should be done in batches!
    MyModel.where(some_thing: nil).find_in_batches do |my_models|
      MyHelperModule.do_the_process my_models: my_models
    end
  end
end

MyHelperModule accepts my_models and does further stuff with ActiveRecord. It calls SomeClass:

# lib/my_helper_module.rb
module MyHelperModule
  def self.do_the_process(args = {})
    my_models = args[:my_models]

    # Parallel.each(my_models, :in_processes => 5) do |my_model|
    my_models.each do |my_model|
      # Reconnect to prevent errors with Postgres
      ActiveRecord::Base.connection.reconnect!
      # Do some active record stuff

      some_var = SomeClass.new(my_model.id)

      # Do something super interesting,
      # fun,
      # AND sexy with my_model
    end
  end
end

SomeClass will go out to the web via WebpageHelper and process a page:

# lib/some_class.rb
require_relative 'webpage_helper'
class SomeClass
  attr_accessor :some_data

  def initialize(arg)
    doc = WebpageHelper.get_doc("http://somesite.com/#{arg}")
    # do more stuff
  end
end

WebpageHelper is where the exception is caught and where an infinite retry loop starts in the case of a 404:

# lib/webpage_helper.rb
require 'nokogiri'
require 'open-uri'

class WebpageHelper
  def self.get_doc(url)
    attempts = 0
    begin
      page_content = open(url).read
      # do more stuff
    rescue Exception => ex
      attempts += 1
      puts "Failed at #{Time.now}"
      puts "Error: #{ex}"
      puts "URL: #{url}"
      puts "Retrying... Attempt #: #{attempts}"
      sleep(10)
      retry
    end
  end
end
Asked Jul 09 '12 by Mario Zigliotto




2 Answers

TL;DR

Use out-of-band error handling and a different conceptual scraping model to speed up operations.

Exceptions Are Not for Common Conditions

There are a number of other answers that address how to handle exceptions for your use case. I'm taking a different approach by saying that handling exceptions is fundamentally the wrong approach here for a number of reasons.

  1. In his book Exceptional Ruby, Avdi Grimm provides some benchmarks showing the performance of exceptions as ~156% slower than using alternative coding techniques such as early returns.

  2. In The Pragmatic Programmer: From Journeyman to Master, the authors state "[E]xceptions should be reserved for unexpected events." In your case, 404 errors are undesirable, but are not at all unexpected--in fact, handling 404 errors is a core consideration!

In short, you need a different approach. Preferably, the alternative approach should provide out-of-band error handling and prevent your process from blocking on retries.

One Alternative: A Faster, More Atomic Process

You have a lot of options here, but the one I'm going to recommend is to handle 404 status codes as a normal result. This allows you to "fail fast," but also allows you to retry pages or remove URLs from your queue at a later time.

Consider this example schema:

ActiveRecord::Schema.define(:version => 20120718124422) do
  create_table "webcrawls", :force => true do |t|
    t.string   "url"         # added so failed URLs can be looked up later
    t.text     "raw_html"
    t.integer  "retries"
    t.integer  "status_code"
    t.text     "parsed_data"
    t.datetime "created_at",  :null => false
    t.datetime "updated_at",  :null => false
  end
end

The idea here is that you would simply treat the entire scrape as an atomic process. For example:

  • Did you get the page?

    Great, store the raw page and the successful status code. You can even parse the raw HTML later, in order to complete your scrapes as fast as possible.

  • Did you get a 404?

    Fine, store the error page and the status code. Move on quickly!

When your process is done crawling URLs, you can then use an ActiveRecord lookup to find all the URLs that recently returned a 404 status so that you can take appropriate action. Perhaps you want to retry the page, log a message, or simply remove the URL from your list of URLs to scrape--"appropriate action" is up to you.
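
For illustration, a minimal sketch of that atomic, status-recording fetch might look like the following. It assumes a hypothetical Webcrawl ActiveRecord model backed by a schema like the one above (including a url column) and sticks with open-uri, as in the question:

require 'open-uri'

def record_scrape(url)
  crawl = Webcrawl.find_or_initialize_by(url: url)
  begin
    response = open(url)
    # Success: record the status and body; parsing can happen later.
    crawl.status_code = response.status.first.to_i
    crawl.raw_html    = response.read
  rescue OpenURI::HTTPError => ex
    # Any 4xx/5xx response lands here; record it and move on instead of retrying.
    crawl.status_code = ex.io.status.first.to_i
    crawl.raw_html    = ex.io.read
  end
  # Track how many times this URL has been attempted.
  crawl.retries = (crawl.retries || 0) + 1
  crawl.save!
end

A broader rescue (for SocketError, timeouts, and similar network-level failures) could record a nil status code in the same way, so "site's down" cases are also captured without blocking the crawl.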

By keeping track of your retry counts, you could even differentiate between transient errors and more permanent errors. This allows you to set thresholds for different actions, depending on the frequency of scraping failures for a given URL.
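
For example, a later pass over the hypothetical webcrawls table might look something like this (the one-day window and three-retry threshold are just placeholders):

Webcrawl.where(status_code: 404)
        .where("updated_at > ?", 1.day.ago)
        .where("retries < ?", 3)
        .find_each do |crawl|
  # Retry the fetch, log it, or drop the URL -- whatever "appropriate action" means here.
end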

This approach also has the added benefit of leveraging the database to manage concurrent writes and share results between processes. This would allow you to parcel out work (perhaps with a message queue or chunked data files) among multiple systems or processes.

Final Thoughts: Scaling Up and Out

Spending less time on retries or error handling during the initial scrape should speed up your process significantly. However, some tasks are just too big for a single-machine or single-process approach. If your process speedup is still insufficient for your needs, you may want to consider a less linear approach using one or more of the following:

  • Forking background processes.
  • Using dRuby to split work among multiple processes or machines.
  • Maximizing core usage by spawning multiple external processes using GNU parallel.
  • Something else that isn't a monolithic, sequential process.

Optimizing the application logic should suffice for the common case; if it doesn't, consider scaling up to more processes or out to more servers. Scaling out will certainly be more work, but will also expand the processing options available to you.
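
If you do go multi-process, one lightweight option (hinted at by the commented-out Parallel.each call in the question) is the parallel gem. A rough sketch, reusing the question's own model and class names:

require 'parallel'

MyModel.where(some_thing: nil).find_in_batches do |batch|
  Parallel.each(batch, :in_processes => 5) do |my_model|
    # Each forked worker needs its own database connection.
    ActiveRecord::Base.connection.reconnect!
    SomeClass.new(my_model.id)
  end
end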

Answered Nov 04 '22 by Todd A. Jacobs


Curb has an easier way of doing this and can be a better (and faster) option than open-uri.

The errors Curb reports (which you can rescue from and act on) are listed here:

http://curb.rubyforge.org/classes/Curl/Err.html

Curb gem: https://github.com/taf2/curb

Sample code:

def browse(url)
  c = Curl::Easy.new(url)
  begin
    c.connect_timeout = 3
    c.perform
    return c.body_str
  rescue Curl::Err::NotFoundError
    handle_not_found_error(url)
  end
end

def handle_not_found_error(url)
  puts "This is a 404!"
end
Answered Nov 04 '22 by Pedro Nascimento