How can I use Perl to grab text from a web page that is dynamically generated with JavaScript?

There is a website I am trying to pull information from with Perl; however, the section of the page I need is generated with JavaScript, so all you see in the source is:

<div id="results"></div>

I need to somehow pull out the contents of that div and save it to a file using Perl/proxies/whatever. For example, the information I want to save would be:

document.getElementById('results').innerHTML;

I am not sure if this is possible, or if anyone has any ideas or a way to do this. I was using a lynx source dump for other pages, but since I can't screen-scrape this page in a straightforward way, I came here to ask about it!

If anyone is interested, the page is http://downloadcenter.trendmicro.com/index.php?clk=left_nav&clkval=pattern_file&regs=NABU and the info I am trying to get is the ConsumerOPR row.

asked Apr 16 '10 by user318747

3 Answers

You'll need to reverse-engineer what the JavaScript is doing. Does it fire off an AJAX request to populate the <div>? If so, it should be pretty easy to sniff the request using Firebug and then duplicate it with LWP::UserAgent or WWW::Mechanize to get the information.
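For example, if Firebug shows the div being filled by a GET request, a minimal LWP::UserAgent sketch might look like this (the endpoint path here is hypothetical; substitute whatever URL the sniffed request actually hits):

use strict;
use warnings;
use LWP::UserAgent;

# Hypothetical endpoint -- replace with the URL Firebug shows being requested.
my $url = 'http://downloadcenter.trendmicro.com/some_ajax_endpoint?regs=NABU';

my $ua = LWP::UserAgent->new;
$ua->agent('Mozilla/5.0');   # some servers reject the default LWP agent string

my $res = $ua->get($url);
die 'Request failed: ' . $res->status_line . "\n" unless $res->is_success;

# The body is whatever the page's JavaScript would have inserted into the div.
print $res->decoded_content;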

If the JavaScript is just doing pure DOM manipulation, then the data must already exist somewhere else in the page or in the JavaScript itself. So figure out where it's coming from and grab it.
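For instance, if the data turns out to be embedded in an inline <script> block, you can often pull it out of the raw HTML with a regex. A minimal sketch, assuming a hypothetical variable named results_data:

use strict;
use warnings;
use LWP::Simple qw(get);

my $html = get('http://example.com/the-page.html')
    or die "Couldn't fetch the page\n";

# Hypothetical: the page embeds its data as   var results_data = "...";
if ($html =~ /var\s+results_data\s*=\s*"([^"]*)"/) {
    print "Found: $1\n";
}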

Finally, if none of those options are adequate, you may need to just use a real browser to do it. There are a few options for automating browser behavior, like WWW::Mechanize::Firefox or Win32::IE::Mechanize.
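As a rough sketch of the WWW::Mechanize::Firefox route (it drives a running Firefox via the MozRepl extension, so both need to be installed), something like this could work; the extraction regex is naive and assumes the div contains no nested divs:

use strict;
use warnings;
use WWW::Mechanize::Firefox;

my $mech = WWW::Mechanize::Firefox->new;
$mech->get('http://downloadcenter.trendmicro.com/index.php?clk=left_nav&clkval=pattern_file&regs=NABU');

# ->content serializes the live DOM, i.e. the page *after* the JavaScript has run.
my $html = $mech->content;

# Naive extraction; breaks if the div contains nested divs.
if ($html =~ m{<div id="results">(.*?)</div>}s) {
    print $1;
}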

answered by friedo


Bringing the Browser to the Server by John Resig might be useful.

answered by farinspace


As the content of your page is generated by JavaScript, you need the ability to:

  • Execute JavaScript code
    • possibly even complex JS code that makes Ajax requests and so on
  • And do it with an engine that supports the functions/methods present in a browser (like DOM manipulation)


One solution is to actually start a browser, navigate to the page, and then parse the page it has loaded to extract the information.

I've never used it for scraping, but the Selenium suite might help here: using Selenium RC, you can start a real browser and pilot it, and you then have functions to get data out of it.

It's not very fast, and it's pretty heavy (it has to start a browser!), but it works quite well: you'll be using Firefox, for example, to navigate to your page, which means a real JavaScript engine that's used every day by a lot of people ;-)
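In Perl, the WWW::Selenium module talks to a Selenium RC server; as a sketch (assuming the server is already running on localhost:4444):

use strict;
use warnings;
use WWW::Selenium;

# Assumes a Selenium RC server is already running on localhost:4444.
my $sel = WWW::Selenium->new(
    host        => 'localhost',
    port        => 4444,
    browser     => '*firefox',
    browser_url => 'http://downloadcenter.trendmicro.com/',
);

$sel->start;
$sel->open('/index.php?clk=left_nav&clkval=pattern_file&regs=NABU');
$sel->wait_for_page_to_load(30_000);

# Grab the text of the div once the JavaScript has populated it.
my $results = $sel->get_text('id=results');
print $results, "\n";

$sel->stop;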

answered by Pascal MARTIN