Crawl a website, get the links, crawl the links with PHP and XPath

I want to crawl an entire website. I have read several threads, but I have not managed to get data from a second level.

That is, I can return the links from a starting page, but then I cannot find a way to follow those links and get the content of each page...

The code I use is:

<?php

// SELECT STARTING PAGE
$url  = 'http://mydomain.com/';
$html = file_get_contents($url);

// GET ALL THE LINKS OF EACH PAGE

// create a dom object
$dom = new DOMDocument();
@$dom->loadHTML($html);

// run xpath for the dom
$xPath = new DOMXPath($dom);

// get links from starting page
$elements = $xPath->query("//a/@href");
foreach ($elements as $e) {
    echo $e->nodeValue . "<br />";
}

// Parse each page using the extracted links?

?>

Could somebody help me out with an example for that last part?

It would be really appreciated!


Well, thanks for your answers! I have tried some things, but I haven't managed to get any results yet; I am new to programming.

Below you can find two of my attempts: the first tries to parse the links, and the second tries to replace file_get_contents with cURL:

 1) 

<?php
// GET STARTING PAGE
$url  = 'http://www.capoeira.com.gr/';
$html = file_get_contents($url);

// GET ALL THE LINKS FROM STARTING PAGE

// create a dom object
$dom = new DOMDocument();
@$dom->loadHTML($html);

// run xpath for the dom
$xPath = new DOMXPath($dom);

// get specific elements from the sites
$elements = $xPath->query("//a/@href");

// PARSE EACH LINK
foreach ($elements as $e) {
    $URLS = file_get_contents($e);
    $dom = new DOMDocument();
    @$dom->loadHTML($html);
    $xPath = new DOMXPath($dom);
    $output = $xPath->query("//div[@class='content-entry clearfix']");
    echo $output->nodeValue;
}
?>

For the above code I get Warning: file_get_contents() expects parameter 1 to be string, object given in ../example.php on line 26
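A likely reason for that warning is that $e is a DOMAttr object, not a string, and the inner loop also re-parses $html (the start page) instead of the page that was just fetched. A minimal corrected sketch, assuming the links are absolute URLs and reusing the div class from the attempt above:

<?php
// Sketch only: read the href string from the DOMAttr ($e->nodeValue),
// parse the fetched page rather than the start page, and iterate the
// DOMNodeList that query() returns instead of echoing it directly.
$url = 'http://www.capoeira.com.gr/';
$dom = new DOMDocument();
@$dom->loadHTML(file_get_contents($url));
$xPath = new DOMXPath($dom);

foreach ($xPath->query("//a/@href") as $e) {
    $pageHtml = file_get_contents($e->nodeValue);   // pass the string, not the DOMAttr object

    $pageDom = new DOMDocument();
    @$pageDom->loadHTML($pageHtml);                 // parse the linked page, not $html
    $pageXPath = new DOMXPath($pageDom);

    foreach ($pageXPath->query("//div[@class='content-entry clearfix']") as $div) {
        echo $div->nodeValue . "<br />";
    }
}
?>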

2)

<?php
$curl = curl_init();
curl_setopt($curl, CURLOPT_POST, 1);
curl_setopt($curl, CURLOPT_URL, "http://capoeira.com.gr");
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
$content = curl_exec($curl);
curl_close($curl);

$dom = new DOMDocument();
@$dom->loadHTML($content);

$xPath = new DOMXPath($dom);
$elements = $xPath->query("//a/@href");
foreach ($elements as $e) {
    echo $e->nodeValue . "<br />";
}
?>

I get no results. I tried to echo $content and then I get:

You don't have permission to access / on this server.

Additionally, a 413 Request Entity Too Large error was encountered while trying to use an ErrorDocument to handle the request...

Any ideas please?? :)
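One likely cause of that 403 is the CURLOPT_POST line in the snippet above: it turns the request into a POST, which many servers refuse for ordinary pages, and some servers also reject requests that carry no User-Agent header. A minimal GET-only sketch (the User-Agent string is just an example value):

<?php
// Sketch: plain GET request (no CURLOPT_POST) with an explicit User-Agent,
// then the same XPath link extraction as before.
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, "http://capoeira.com.gr");
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($curl, CURLOPT_USERAGENT, "Mozilla/5.0 (compatible; ExampleCrawler/1.0)");
$content = curl_exec($curl);
curl_close($curl);

$dom = new DOMDocument();
@$dom->loadHTML($content);
$xPath = new DOMXPath($dom);
foreach ($xPath->query("//a/@href") as $e) {
    echo $e->nodeValue . "<br />";
}
?>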

asked Apr 11 '12 by taz

2 Answers

You can try the following. See this thread for more details.

<?php
// set_time_limit(0);

function crawl_page($url, $depth = 5)
{
    // Remember visited URLs across recursive calls so pages are not crawled twice
    static $seen = array();

    if ($depth == 0 || in_array($url, $seen)) {
        return;
    }
    $seen[] = $url;

    // Fetch the page with cURL
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_TIMEOUT, 30);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $result = curl_exec($ch);
    curl_close($ch);

    if ($result) {
        // Keep only the <a> tags, then extract every href and its link text
        $stripped_file = strip_tags($result, "<a>");
        preg_match_all("/<a[\s]+[^>]*?href[\s]?=[\s\"\']+"."(.*?)[\"\']+.*?>"."([^<]+|.*?)?<\/a>/", $stripped_file, $matches, PREG_SET_ORDER);

        foreach ($matches as $match) {
            $href = $match[1];
            if (0 !== strpos($href, 'http')) {
                // Relative link: rebuild an absolute URL from the page that was just crawled
                $path = '/' . ltrim($href, '/');
                if (extension_loaded('http')) {
                    $href = http_build_url($url, array('path' => $path));
                } else {
                    $parts = parse_url($url);
                    $href  = $parts['scheme'] . '://';
                    if (isset($parts['user']) && isset($parts['pass'])) {
                        $href .= $parts['user'] . ':' . $parts['pass'] . '@';
                    }
                    $href .= $parts['host'];
                    if (isset($parts['port'])) {
                        $href .= ':' . $parts['port'];
                    }
                    $href .= $path;
                }
            }
            // Recurse into the linked page
            crawl_page($href, $depth - 1);
        }
    }
    echo "Crawled {$url}<br />";
}

crawl_page("http://www.sitename.com/", 3);
?>
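Since the question already uses DOMDocument and XPath, the regex-based link extraction above could equally be done with the same XPath query. A rough sketch of that substitution, assuming $result holds the HTML fetched inside crawl_page():

// Sketch: replace the strip_tags()/preg_match_all() step with DOMXPath.
$dom = new DOMDocument();
@$dom->loadHTML($result);
$xPath = new DOMXPath($dom);

$hrefs = array();
foreach ($xPath->query("//a/@href") as $attr) {
    $hrefs[] = $attr->nodeValue;   // raw href value of each <a> tag
}
// $hrefs can then go through the same relative-URL handling and recursion as above.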

answered by Team Webgalli

$doc = new DOMDocument();
// loadHTMLFile() is more forgiving than load() for real-world (non-XML) HTML
@$doc->loadHTMLFile('file.htm');

$items = $doc->getElementsByTagName('a');

foreach ($items as $value) {
    // link text
    echo $value->nodeValue . "\n";
    // href attribute of the same <a> element
    $attrs = $value->attributes;
    echo $attrs->getNamedItem('href')->nodeValue . "\n";
}
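
The same extraction can also be written with the XPath query used in the question; a short equivalent sketch (again assuming a local file.htm):

// Sketch: each result of the query is a DOMAttr whose nodeValue is the href.
$doc = new DOMDocument();
@$doc->loadHTMLFile('file.htm');
$xPath = new DOMXPath($doc);
foreach ($xPath->query('//a/@href') as $attr) {
    echo $attr->nodeValue . "\n";
}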

answered by Daniel W.