Python: how to crawl past __VIEWSTATE

I'm implementing a simple Python crawler. I tested it on an .aspx site and realised it didn't crawl past <input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="/wEPDwUKLTc2MzAxM..." />

The value of __VIEWSTATE is super long, and none of the HTML tags below it were crawled. This is my crawler:

try:
    # For Python 3.0 and later (URLError lives in urllib.error there)
    from urllib.request import Request, urlopen
    from urllib.error import URLError
except ImportError:
    # Fall back to Python 2's urllib2
    from urllib2 import Request, urlopen, URLError

# Python 2 module name (it is html.parser in Python 3); the print
# statements below are Python 2 syntax as well
from HTMLParser import HTMLParser

url = "http://tickets.cathay.com.sg/index.aspx"
response = urlopen(url)
html = response.read()

# Create a subclass and override the handler methods
class MetaParser(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print "Encountered a start tag:", tag
        for attr in attrs:
            print("attr:",attr)

        if tag == "img":
            for attr in attrs:
                print attr


# Instantiate the parser and feed it some HTML
parser = MetaParser()
parser.feed(html)

Here is the result of the above crawler:

Encountered a start tag: html
('attr:', ('xmlns', 'http://www.w3.org/1999/xhtml'))
Encountered a start tag: head
Encountered a start tag: title
Encountered a start tag: style
('attr:', ('type', 'text/css'))
Encountered a start tag: style
('attr:', ('type', 'text/css'))
Encountered a start tag: script
('attr:', ('language', 'javascript'))
('attr:', ('type', 'text/javascript'))
Encountered a start tag: body
Encountered a start tag: div
('attr:', ('id', 'div_loading'))
('attr:', ('style', 'display:none;'))
Encountered a start tag: b
Encountered a start tag: script
('attr:', ('language', 'javascript'))
('attr:', ('type', 'text/javascript'))
Encountered a start tag: div
('attr:', ('style', 'height:100%;width:100%;vertical-align:middle;text-align:center;'))
Encountered a start tag: br
Encountered a start tag: br
Encountered a start tag: table
('attr:', ('id', 'tbl_noJS'))
('attr:', ('cellpadding', '3'))
('attr:', ('cellspacing', '3'))
('attr:', ('class', 'asc_mb__Error'))
Encountered a start tag: tr
Encountered a start tag: th
Encountered a start tag: tr
Encountered a start tag: td
Encountered a start tag: script
('attr:', ('language', 'javascript'))
('attr:', ('type', 'text/javascript'))
Encountered a start tag: form
('attr:', ('name', 'aspnetForm'))
('attr:', ('method', 'post'))
('attr:', ('action', 'index.aspx'))
('attr:', ('id', 'aspnetForm'))
Encountered a start tag: input
('attr:', ('type', 'hidden'))
('attr:', ('name', '__VIEWSTATE'))
('attr:', ('id', '__VIEWSTATE'))
('attr:', ('value', '/wEPDwULLTExNjcwMjQ1OTIPFgIeD19fcG9zdGJhY2tjb3VudGYWAmYPZBYCAgMPZBYCAgMPZBYEZg8QZGQWAGQCAQ8QZGQWAGRk94h3o3llZzxioTZaZaEsGu8qYIM='))
Encountered a start tag: script
('attr:', ('type', 'text/javascript'))

If you notice, the value of __VIEWSTATE is similar to, but not the same as, the one found in the browser's View Page Source. The other attributes also seem different.

I found an example here, but it didn't work. I googled and couldn't find much about it.

I investigated further and tried crawling http://www.microsoft.com/en-sg/default.aspx. It works! In View Page Source, I can see it also has __VIEWSTATE. I'm puzzled. So why did my crawler fail to crawl http://tickets.cathay.com.sg/index.aspx?
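
One thing I might try next (just a sketch; the User-Agent string below is only an example, not something the site is known to require) is fetching the page with and without browser-like headers to see whether the server simply returns a different, shorter page to a plain script:

try:
    from urllib.request import Request, urlopen   # Python 3
except ImportError:
    from urllib2 import Request, urlopen          # Python 2

url = "http://tickets.cathay.com.sg/index.aspx"

# Plain request, the same way the crawler above fetches the page
plain = urlopen(url).read()

# Same URL, but pretending to be a browser (example User-Agent only)
req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
browser_like = urlopen(req).read()

# If these lengths differ a lot, the server is varying the page per client
print(len(plain))
print(len(browser_like))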

Here is another crawler using scrapy:

from scrapy.spider import Spider
from scrapy.selector import Selector

class MySpider(Spider):
    name = "myspider"

    start_urls = [
        "http://tickets.cathay.com.sg/index.aspx"
    ]

    def parse(self, response):
        filename = response.url.split("/")[-2]
        print "filename[", filename, "]"
        open(filename, 'wb').write(response.body)

        sel = Selector(response)

        # Using XPath query
        print sel.xpath('//img')

Here is the result:

User-MacBook-Pro:tutorial User$ scrapy crawl myspider
2014-06-20 23:52:10+0800 [scrapy] INFO: Scrapy 0.22.2 started (bot: tutorial)
2014-06-20 23:52:10+0800 [scrapy] INFO: Optional features available: ssl, http11
2014-06-20 23:52:10+0800 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial'}
2014-06-20 23:52:10+0800 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-06-20 23:52:10+0800 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-06-20 23:52:10+0800 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-06-20 23:52:10+0800 [scrapy] INFO: Enabled item pipelines: 
2014-06-20 23:52:10+0800 [myspider] INFO: Spider opened
2014-06-20 23:52:10+0800 [myspider] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-06-20 23:52:10+0800 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-06-20 23:52:10+0800 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2014-06-20 23:52:11+0800 [myspider] DEBUG: Crawled (200) <GET http://tickets.cathay.com.sg/index.aspx> (referer: None)
filename[ tickets.cathay.com.sg ]
[]
2014-06-20 23:52:11+0800 [myspider] INFO: Closing spider (finished)
2014-06-20 23:52:11+0800 [myspider] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 230,
     'downloader/request_count': 1,
     'downloader/request_method_count/GET': 1,
     'downloader/response_bytes': 1856,
     'downloader/response_count': 1,
     'downloader/response_status_count/200': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2014, 6, 20, 15, 52, 11, 9068),
     'log_count/DEBUG': 3,
     'log_count/INFO': 7,
     'response_received_count': 1,
     'scheduler/dequeued': 1,
     'scheduler/dequeued/memory': 1,
     'scheduler/enqueued': 1,
     'scheduler/enqueued/memory': 1,
     'start_time': datetime.datetime(2014, 6, 20, 15, 52, 10, 960574)}
2014-06-20 23:52:11+0800 [myspider] INFO: Spider closed (finished)

With Scrapy, which I believe uses lxml, it couldn't crawl anything below __VIEWSTATE either.
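
The Scrapy stats above also show downloader/response_bytes is only 1856, which seems far smaller than the page the browser renders, so I suspect the server is handing a cut-down page to the crawler (the tbl_noJS / asc_mb__Error table in the first output looks like a no-JavaScript fallback) rather than the parsers stopping at __VIEWSTATE. As a quick check (just a sketch, reading the file the spider saved), I could look at where the saved body actually ends:

# Look at the file the spider wrote ("tickets.cathay.com.sg" per the log)
# to see how big it really is and where the HTML stops.
with open("tickets.cathay.com.sg", "rb") as f:
    body = f.read()

print(len(body))     # roughly comparable to downloader/response_bytes above
print(body[-300:])   # the tail of what the server actually sent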

asked Jun 20 '14 by chrizonline


1 Answer

Here is a working example using requests and beautifulsoup4 (I don't know enough about Scrapy to do it with that).

import requests
from bs4 import BeautifulSoup

def get_viewstate():
    # Fetch the page once and pull out the __VIEWSTATE hidden field
    url = "http://tickets.cathay.com.sg/index.aspx"
    req = requests.get(url)
    data = req.text

    bs = BeautifulSoup(data)
    return bs.find("input", {"id": "__VIEWSTATE"}).attrs['value']

url = "http://tickets.cathay.com.sg/index.aspx"
# POST the viewstate back to the form, which gets the server to return the full page
data = {"__VIEWSTATE": get_viewstate()}
req = requests.post(url, data)

bs = BeautifulSoup(req.text)
print bs.findAll("td", {"class": "movieTitlePlatinum"}) #Just an example, you could also do bs.findAll("img") etc.
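
Depending on the page, ASP.NET postbacks often expect the other hidden fields (__EVENTVALIDATION, __VIEWSTATEGENERATOR) and the session cookies as well, so if the simple version stops working, a more defensive variant is to keep one session and forward every hidden input the form contains. This is only a sketch and not verified against this particular site:

import requests
from bs4 import BeautifulSoup

url = "http://tickets.cathay.com.sg/index.aspx"

# Reuse one session so any cookies set by the first GET are sent back.
session = requests.Session()
soup = BeautifulSoup(session.get(url).text)

# Collect every hidden input (__VIEWSTATE and, if present,
# __EVENTVALIDATION / __VIEWSTATEGENERATOR) into the POST data.
form_data = {}
for hidden in soup.findAll("input", {"type": "hidden"}):
    name = hidden.get("name")
    if name:
        form_data[name] = hidden.get("value", "")

resp = session.post(url, data=form_data)
print(BeautifulSoup(resp.text).findAll("td", {"class": "movieTitlePlatinum"}))
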
answered Oct 06 '22 by scandinavian_