I heard once that this could be done using cURL, but I don't want to display all of an external site's content on my site, only the contents of a particular div. How can this be done?
You can use PHP Simple HTML DOM Parser to grab a page and easily select parts of it.
As easy as:
include 'simple_html_dom.php';
$html = file_get_html('http://www.google.com/');
$ret = $html->find('div[id=foo]');
The parser's documentation covers the full selector syntax.
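Note that find() returns an array of matching elements unless you pass an index as the second argument. Here's a minimal, self-contained sketch of the above (the example.com URL and the foo id are placeholders, not taken from your site):

<?php
include 'simple_html_dom.php';

// Fetch and parse the remote page.
$html = file_get_html('http://www.example.com/');

// Passing an index to find() returns that single element
// (or null if there is no match) instead of an array.
$div = $html->find('div[id=foo]', 0);

if ($div !== null) {
    echo $div->innertext;   // the div's inner HTML
    // echo $div->plaintext; // or just its text content
}
?>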
If what you want to do is grab the header of http://www.freeoh.net/, the following code will work. Place simple_html_dom.php and a file called page.txt (make sure the script has permission to read and write it) in the same folder as the script below. (I'm assuming you already have cURL enabled, since you mentioned it in your question.)
<?php
include 'simple_html_dom.php';

// Fetch the page with cURL.
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, "http://www.freeoh.net/");
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl, CURLOPT_USERAGENT, "Mozilla/5.0 (compatible; MSIE 5.01; Windows NT 5.0)");
curl_setopt($curl, CURLOPT_AUTOREFERER, 1);
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($curl, CURLOPT_REFERER, "http://www.freeoh.net/");
$result = curl_exec($curl);
curl_close($curl);

// Write the contents of $result to a file.
$file = "page.txt";
$fh = fopen($file, 'w') or die("can't open file");
fwrite($fh, $result);
fclose($fh);

// Turn the file into a DOM object and grab the second div.
$page = file_get_html("page.txt");
$header = $page->find("div", 1);
echo $header;
?>
It's a little hacky, because I used cURL to grab the page and then had to store it somewhere so that PHP Simple HTML DOM Parser would parse it properly, but it works.
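If you'd rather avoid the temporary file, the same library also provides str_get_html(), which parses an HTML string directly. A minimal sketch of that variant, reusing the URL and div index from the example above:

<?php
include 'simple_html_dom.php';

// Fetch the page with cURL, as before.
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, "http://www.freeoh.net/");
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, 1);
$result = curl_exec($curl);
curl_close($curl);

// Parse the HTML string directly; no intermediate file is needed.
$page = str_get_html($result);
$header = $page->find("div", 1);
echo $header;
?>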