
Executing JavaScript in href of links with Python

I am trying to download some PDFs automatically from a site (http://bibliotecadigitalhispanica.bne.es) using Python.

I've tried the urllib/urllib2/mechanize modules (which I have used on other sites, with the standard functions like urlopen, urlretrieve, etc.), but here the links have JavaScript embedded in their href attributes that does some processing and opens the PDF, which these modules don't seem to be able to handle, at least from what I have read here. For example, when I do the following:

import mechanize

request = mechanize.Request('the example url below')
response = mechanize.urlopen(request)

it just gets back the containing HTML page; I can't seem to extract the PDF (there are no links to it inside that page, either).

I know by looking through the headers in a real browser (using the LiveHTTPHeaders extension in Firefox) that a lot of HTTP requests are made and eventually the PDF is returned (and displayed in the browser). I would like to be able to intercept this and download it. Concretely, I get a series of 302 and 304 responses, eventually leading to the PDF.

Here is an example of a link attribute that I am crawling: href='javascript:open_window_delivery("http://bibliotecadigitalhispanica.bne.es:80/verylonglinktoaccess");'
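Since that href just wraps a plain URL in a JavaScript call, the URL itself can be extracted without executing any JavaScript; a minimal sketch (the helper name is made up, the href is the example above):

```python
import re

def extract_target_url(href):
    """Pull the URL argument out of a javascript:open_window_delivery("...") href."""
    match = re.search(r'open_window_delivery\("([^"]+)"\)', href)
    return match.group(1) if match else None

href = ('javascript:open_window_delivery('
        '"http://bibliotecadigitalhispanica.bne.es:80/verylonglinktoaccess");')
print(extract_target_url(href))
```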

It seems that if I execute the JavaScript embedded in the href attribute, I can eventually reach the PDF document itself. I've tried Selenium, but it is a tad confusing: even after reading its documentation, I'm not quite sure how to use it. Can someone suggest a way (either through a module I haven't tried or through one that I have) to do this?

Thank you very much for any help with this.

P.S.: in case you would like to see what I am trying to replicate, here is the page with the PDF links mentioned above (the ones with the PDF icons): http://bibliotecadigitalhispanica.bne.es/R/9424CFL1MDQGLGBB98QSV1HFAD2APYDME4GQKCBSLXFX154L4G-01075?func=collections-result&collection_id=1356

spanport asked Mar 16 '12 09:03

1 Answer

javascript:open_window_delivery("http://bibliotecadigitalhispanica.bne.es:80/webclient/DeliveryManager?application=DIGITOOL-3&owner=resourcediscovery&custom_att_2=simple_viewer&forebear_coll=1333&user=GUEST&pds_handle=&pid=1673416&con_lng=SPA&rd_session=http://bibliotecadigitalhispanica.bne.es:80/R/7IUR42HNR5J19AY1Y3QJTL1P9M2AN81RCY4DRFE8JN5T22BI7I-03416");

That URL leads to a 302 page. If you follow it, you end up at a frame page, where the bottom frame is the content page.

http://bibliotecadigitalhispanica.bne.es///exlibris/dtl/d3_1/apache_media/L2V4bGlicmlzL2R0bC9kM18xL2FwYWNoZV9tZWRpYS8xNjczNDE2.pdf

(lib)curl can follow 302 pages.
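The same redirect-following is available from Python's standard library, since urlopen chases 302s automatically (like curl -L); a minimal sketch, with a magic-byte check as one common way to confirm the body really is the PDF rather than another HTML page:

```python
import urllib.request  # urllib2 in Python 2

PDF_URL = ('http://bibliotecadigitalhispanica.bne.es///exlibris/dtl/d3_1/'
           'apache_media/L2V4bGlicmlzL2R0bC9kM18xL2FwYWNoZV9tZWRpYS8xNjczNDE2.pdf')

def looks_like_pdf(data):
    """Every PDF file starts with the magic bytes %PDF."""
    return data[:4] == b'%PDF'

def fetch(url):
    """urlopen follows 302 redirects automatically, like curl -L."""
    with urllib.request.urlopen(url) as response:
        return response.read()

# usage (network required):
#   body = fetch(PDF_URL)
#   if looks_like_pdf(body):
#       open('document.pdf', 'wb').write(body)
```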

JavaScript isn't the problem so far. Following that, you end up at single_viewer_toolbar2.jsp, where the function setLabelMetadataStream assembles the URL for the PDF before submitting it to its iframe "sendRequestIFrame".
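That frame page can be picked apart without a JavaScript engine; a minimal sketch using the stdlib parser (the frame names in the sample HTML are invented):

```python
from html.parser import HTMLParser

class FrameFinder(HTMLParser):
    """Collect the src attribute of every <frame>/<iframe> on a page."""
    def __init__(self):
        super().__init__()
        self.frame_srcs = []

    def handle_starttag(self, tag, attrs):
        if tag in ('frame', 'iframe'):
            src = dict(attrs).get('src')
            if src:
                self.frame_srcs.append(src)

finder = FrameFinder()
finder.feed('<frameset><frame src="toolbar.jsp"><frame src="viewer.jsp"></frameset>')
print(finder.frame_srcs)  # the bottom (last) frame is the content frame
```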

I see 3 possibilities:

  1. The JavaScript-execution approach: high complexity, lots of code to write, probably brittle.
  2. Something browser-based: Selenium is probably a good fit. elinks2 also has JavaScript support, and according to its Wikipedia page it can be scripted in "Perl, Ruby, Lua and GNU Guile".
  3. Ask the web administrator for help. You should do this anyway to understand their policy towards bots; perhaps they can provide you (and others) with an interface/API.

I recommend learning more about Selenium; it seems the easiest option.
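For option 2, the embedded JavaScript can be executed directly in a real browser with Selenium; a rough sketch, untested against this particular site, assuming Firefox is available (the function names other than Selenium's own are mine):

```python
def strip_javascript_scheme(href):
    """Turn 'javascript:open_window_delivery("...")' into plain JS to run."""
    prefix = 'javascript:'
    return href[len(prefix):] if href.startswith(prefix) else href

def open_pdf_links(page_url):
    # Imported here so the helper above works without Selenium installed.
    from selenium import webdriver
    driver = webdriver.Firefox()
    try:
        driver.get(page_url)
        for link in driver.find_elements('css selector', "a[href^='javascript:']"):
            js = strip_javascript_scheme(link.get_attribute('href'))
            driver.execute_script(js)  # runs open_window_delivery(...) in the page
    finally:
        driver.quit()
```

The browser then performs the same chain of 302s a human click would trigger; the PDF still has to be saved from whatever window the script opens.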

j13r answered Sep 21 '22 16:09