I'm attempting to programmatically submit some data to a form on our company's admin page rather than doing it by hand.
I've written numerous other tools that scrape this website and manipulate data, but for some reason this particular one is giving me a ton of trouble.
Below are the pages I'm attempting to scrape and POST data to. Note that these pages usually show up in JS shadowboxes; however, everything functions fine with JavaScript disabled, so I'm assuming JavaScript is not the source of the scraper trouble.
(Note: since this is a company page, I've replaced all the form fields with junk titles, so, for instance, the client numbers are completely made up.)
Also, since it's a company page behind a username/password wall, I can't give out the website for testing, so I've attempted to inject as much detail as possible into this post!
Main entry point is here:
From this page, I click "Add New form", which opens the next page in a new tab (since JavaScript is disabled).
On that page, I fill out the small form and click submit, which then loads the next page displaying a success message.
Should be simple, right?
import mechanize
import base64
import cookielib

br = mechanize.Browser()
username = 'USERNAME'
password = 'PASSWORD'

# Note: assigning to br.addheaders replaces anything appended earlier,
# so both headers have to go in the same list. encodestring() also
# appends a newline, which has to be stripped or the header is malformed.
br.addheaders = [
    ('User-agent',
     'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.22 (KHTML,'
     ' like Gecko) Chrome/25.0.1364.172 Safari/537.22'),
    ('Authorization',
     'Basic %s' % base64.encodestring('%s:%s' % (username, password)).strip())]

br.open('http://www.our_company_page.com/adm/add_forms.php')

links = [link for link in br.links()]

# Follow "Add a form" link
response = br.follow_link(links[0])

br.select_form(nr=0)
br.form.set_all_readonly(False)
br.form['formNumber'] = "FROM_PYTHON"
br.form['RevisionNumber'] = ['20']
br.form['FormType'] = ['H(num)']

response = br.submit()
print response.read()  # Shows the exact same page! >:(
So, as you can see, I attempt to duplicate the steps I would take in a browser: I load the initial /adm/forms page, follow the first link (Add a Form), fill out the form, and click the submit button. But here's where it gets screwy. The response that mechanize returns is the exact same page with the form. No error messages, no success messages, and when I manually check our admin page, no changes have been made.
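As an aside, mechanize can log its own wire traffic, which is a cheaper first diagnostic than a packet capture. A minimal sketch, applied to the br object above (set_debug_http and friends are standard mechanize.Browser switches):

import logging
import sys

# Route mechanize's logger to stdout so the redirect/response logs are visible
logger = logging.getLogger('mechanize')
logger.addHandler(logging.StreamHandler(sys.stdout))
logger.setLevel(logging.INFO)

br.set_debug_http(True)       # echo outgoing and incoming HTTP headers
br.set_debug_redirects(True)  # log how redirects are followed
br.set_debug_responses(True)  # log response bodies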
Frustrated, I opened Chrome and watched the Network tab as I manually filled out and submitted the form in the browser.
Upon submitting the form, this is the network activity:
Seems pretty straightforward to me. There's the post, then a get for the CSS files, another get for the jQuery library, and one more get for some kind of image, though I have no idea what that is for.
After some Googling about scraping problems, I saw a suggestion that the server may be expecting a certain header, and that I should simply copy everything the browser sends in the POST request and then slowly take away headers until I figured out which one was the important one. So I did just that: I copied every bit of information in the Network tab and stuck it in my POST request.
I had some trouble figuring out all of the header stuff with mechanize, so I switched over to urllib2.
import urllib
import urllib2
import base64

url = 'http://www.our_company_page.com/adm/add_forms.php'
values = {
    'SID': '',  # Hidden field
    'FormNumber': 'FROM_PYTHON1030PM',
    'RevisionNumber': '5',
    'FormType': 'H(num)',
    'fsubmit': 'Save Page'
}
username = 'USERNAME'
password = 'PASSWORD'
headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
    'Accept-Encoding': 'gzip,deflate,sdch',
    'Accept-Language': 'en-US,en;q=0.8',
    # encodestring() appends a newline; strip it or the header is malformed
    'Authorization': 'Basic %s' % base64.encodestring('%s:%s' % (username, password)).strip(),
    'Cache-Control': 'max-age=0',
    'Connection': 'keep-alive',
    'Content-Type': 'application/x-www-form-urlencoded',
    'Cookie': 'ID=201399',
    'Host': 'our_company_page.com',
    'Origin': 'http://our_company_page.com',
    'Referer': 'http://our_company_page.com/adm/add_form.php',
    # Only one User-Agent: a Python dict silently keeps just the last
    # of two duplicate keys anyway
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.31 (KHTML, '
                  'like Gecko) Chrome/26.0.1410.43 Safari/537.31'
}
data = urllib.urlencode(values)
req = urllib2.Request(url, data, headers)
response = urllib2.urlopen(req)
print response.read()
As you can see, I added every header present in Chrome's Network tab to the POST request in urllib2.
One additional change from the mechanize version is that I now access the add_form.php page directly, by adding the relevant cookie to my Request.
However, even after duplicating everything I can, I still have the exact same issue: the response is the exact same page I started on. No errors, no success messages, no changes on the server, just a return to a blank form.
Time to do some traffic sniffing. I'm determined to see WTF is going on in this magical POST request!
I download, install, and fire up Wireshark. I filter for http, then first submit the form manually in the browser, and then run my code, which attempts to submit the form programmatically.
This is the network traffic:
Aside from the headers being in a slightly different order (does that matter?), they look exactly the same!
So that's where I am, completely confused as to why a POST request which is (as far as I can tell) nearly identical to the one made by the browser isn't making any changes on the server.
Has anyone ever encountered anything like this? Am I missing something obvious? What's going on here?
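As an aside, urllib2 can echo the raw headers it sends without a packet capture, which makes diffing against the browser's traffic easier. A minimal sketch (HTTPHandler's debuglevel flag is standard urllib2 behavior):

import urllib2

# An opener whose HTTP handler prints request and response headers
# to stdout as the traffic happens
debug_opener = urllib2.build_opener(urllib2.HTTPHandler(debuglevel=1))
urllib2.install_opener(debug_opener)
# Every subsequent urllib2.urlopen(...) call now dumps the exact
# request line and headers it puts on the wire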
As per Ric's suggestion, I replicated the POST data exactly, copying it directly from the source view of Chrome's Network tab.
The modified code looks as follows:
data = 'SegmentID=&Segment=FROMPYTHON&SegmentPosition=1&SegmentContains=Sections&fsubmit=Save+Page'
req = urllib2.Request(url, data, headers)
response = urllib2.urlopen(req)
print response.read()
The only thing I changed was the Segment value, from FROMBROWSER to FROMPYTHON.
Unfortunately, this still yields the same result: the response is the same page I started from.
I checked out the requests library, duplicated my efforts using its API, and lo and behold, it magically worked! The POST actually went through. The question remains: why?! I again took another snapshot with Wireshark, and as near as I can tell it is exactly the same as the POST made from the browser.
import requests

def post(eventID, name, pos, containsID):
    segmentContains = ["Sections", "Products"]
    url = 'http://my_site.com/adm/add_page.php'
    cookies = dict(EventID=str(eventID))
    payload = {
        "SegmentID": "",
        "FormNumber": name,
        "RevisionNumber": str(pos),
        "FormType": containsID,
        "fsubmit": "Save Page"
    }
    # auth is a local module holding the username/password
    r = requests.post(
        url,
        auth=(auth.username, auth.password),
        allow_redirects=True,
        cookies=cookies,
        data=payload)
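(For context, the function gets invoked with the junk placeholder values from earlier; the exact arguments here are hypothetical:)

post(201399, 'FROM_PYTHON', 5, 'H(num)')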
So, to summarize the current state of the question: it works, but nothing has really changed in my understanding. I have no idea why the attempts with both mechanize and urllib2 failed. What is going on that allows the requests POST to actually go through?
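One way to chase that down is to inspect the exact request that requests builds before it hits the wire, and diff it against the urllib2 version. A minimal sketch using the library's PreparedRequest machinery (URL and payload are the junk placeholders from above):

import requests

session = requests.Session()
prepared = requests.Request(
    'POST',
    'http://my_site.com/adm/add_page.php',
    auth=('USERNAME', 'PASSWORD'),
    cookies={'EventID': '201399'},
    data={'SegmentID': '', 'Segment': 'FROMPYTHON', 'fsubmit': 'Save Page'},
).prepare()

# Everything requests is about to send, byte for byte
print prepared.method, prepared.url
print prepared.headers
print prepared.body

response = session.send(prepared)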
At Wing Tang Wong's suggestion, I created a cookie handler and attached it to the urllib2 opener, so cookies are no longer being sent manually in the headers. In fact, I don't assign any cookies at all now.
I first connect to the adm page which has the link to the form, rather than connecting to the form right away:
'http://my_web_page.com/adm/segments.php?&n=201399'
This puts the ID cookie into my urllib2 cookie jar. From this point I follow the link to the page that has the form, and then attempt to submit to it as usual.
import os
import base64
import urllib
import urllib2
import cookielib
import auth  # local module holding the username/password

url = 'http://my_web_page.com/adm/segments.php?&n=201399'
post_url = 'http://my_web_page.com/adm/add_page.php'

values = {
    'SegmentID': '',
    'Segment': 'FROM_PYTHON1030PM',
    'SegmentPosition': '5',
    'SegmentContains': 'Products',
    'fsubmit': 'Save Page'
}
username = auth.username
password = auth.password
headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
    'Accept-Encoding': 'gzip,deflate,sdch',
    'Accept-Language': 'en-US,en;q=0.8',
    'Authorization': 'Basic %s' % base64.encodestring('%s:%s' % (username, password)).strip(),
    'Cache-Control': 'max-age=0',
    'Connection': 'keep-alive',
    'Content-Type': 'application/x-www-form-urlencoded',
    'Host': 'my_site.com',
    'Origin': 'http://my_site.com',
    'Referer': 'http://my_site.com/adm/add_page.php',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.43 Safari/537.31'
}

COOKIEFILE = 'cookies.lwp'
cj = cookielib.LWPCookieJar()
if os.path.isfile(COOKIEFILE):
    cj.load(COOKIEFILE)

opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
urllib2.install_opener(opener)

data = urllib.urlencode(values)

# GET the page with the link first, so the server can set its cookie...
req = urllib2.Request(url, headers=headers)
handle = urllib2.urlopen(req)

# ...then POST the form; the cookie jar sends the cookie back automatically
req = urllib2.Request(post_url, data, headers)
handle = urllib2.urlopen(req)
print handle.info()
print handle.read()
print

if cj:
    print 'These are the cookies we have received so far :'
    for index, cookie in enumerate(cj):
        print index, ' : ', cookie
    cj.save(COOKIEFILE)
Same thing as before. No changes get made on the server. To verify that the cookies are indeed there, I print them to the console after submitting the form, which gives the output:
These are the cookies we have received so far :
<Cookie EventID=201399 for my_site.com/adm>
So the cookie is there, and it has been sent alongside the request... yet I'm still not sure what's going on.
I've read and re-read your post and the other folks' answers a few times. My thoughts:
When you implemented this in mechanize and urllib2, it looks like the cookies were hard-coded into the request headers. This would most likely cause the form to kick you out.
When you switched to the web browser, and likewise to the Python requests library, the cookie and session handling was being taken care of behind the scenes.
I believe that if you change your code to take the cookie and session state into account, i.e. start the automated session with an empty cookie jar and no session data for the site, but properly track and manage them over the course of the session, it should work.
Simply copying and substituting the header data will not work, and a properly coded site should bounce you back to the beginning.
Without seeing the backend code for the website, the above is my observation. Cookies and session data are the culprit.
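To illustrate, here is a minimal sketch of that "start with an empty jar, let the library track it" pattern with a requests session (URLs and field names are the placeholders from your post):

import requests

# A fresh session starts with an empty cookie jar, exactly like a
# browser's first visit, and keeps whatever the server sets afterwards
session = requests.Session()
session.auth = ('USERNAME', 'PASSWORD')  # basic auth on every request

# Hit the listing page first so the server can hand out its cookie...
session.get('http://my_site.com/adm/segments.php?&n=201399')

# ...then submit the form; the jar sends the cookie back automatically
response = session.post(
    'http://my_site.com/adm/add_page.php',
    data={
        'SegmentID': '',
        'Segment': 'FROM_PYTHON',
        'SegmentPosition': '5',
        'SegmentContains': 'Products',
        'fsubmit': 'Save Page',
    })
print response.status_code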
Edit:
Found this link: http://docs.python-requests.org/en/latest/
It describes accessing a site with authentication, etc. The format of the authentication is similar to the requests implementation you are using. The docs link to a gist with a urllib2 implementation that does the same thing, and I noticed that its authentication bits are different from how you are doing yours:
https://gist.github.com/kennethreitz/973705
From the page:
password_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_manager.add_password(None, gh_url, 'user', 'pass')
auth_manager = urllib2.HTTPBasicAuthHandler(password_manager)
opener = urllib2.build_opener(auth_manager)
I wonder whether it would work if you changed the way you implement the authentication bits in your urllib2 version.
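For completeness, a minimal sketch of your urllib2 flow with both handlers installed, so auth and cookies are negotiated by the library rather than pasted into the headers (URLs are your placeholders; untested against your site, obviously):

import urllib
import urllib2
import cookielib

base_url = 'http://my_web_page.com/adm/segments.php?&n=201399'
post_url = 'http://my_web_page.com/adm/add_page.php'

# Let urllib2 answer the 401 challenge itself instead of
# pre-building the Authorization header by hand
password_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_manager.add_password(None, 'http://my_web_page.com/', 'USERNAME', 'PASSWORD')
auth_handler = urllib2.HTTPBasicAuthHandler(password_manager)

# ...and let a cookie processor track whatever the server sets
cookie_jar = cookielib.CookieJar()
opener = urllib2.build_opener(auth_handler, urllib2.HTTPCookieProcessor(cookie_jar))

opener.open(base_url)  # first GET collects the session cookie
data = urllib.urlencode({
    'SegmentID': '',
    'Segment': 'FROM_PYTHON',
    'SegmentPosition': '5',
    'SegmentContains': 'Products',
    'fsubmit': 'Save Page',
})
response = opener.open(post_url, data)  # POST with cookie and auth attached
print response.read()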