Python Requests - managing cookies

I'm trying to get some content automatically from a site using requests (and bs4).

I have a script that gets a cookie:

import requests

def getCookies(self):
    username = 'username'
    password = 'password'
    URL = 'logonURL'
    # log in with HTTP Basic Auth and grab whatever cookies come back
    r = requests.get(URL, auth=(username, password))
    cookies = r.cookies
    return cookies

A dump of the cookies looks like this:

<<class 'requests.cookies.RequestsCookieJar'>[<Cookie ASP.NET_SessionId=yqokjr55ezarqbijyrwnov45 for URL.com/>, <Cookie BIGipServerPE_Journals.lww.com_80=1440336906.20480.0000 for URL.com/>, <Cookie JournalsLockCookie=id=a5720750-3f20-4207-a500-93ae4389213c&ip=IP address for URL.com/>]>

But when I pass the cookie object to the next URL:

# Soup is BeautifulSoup from bs4
soup = Soup(requests.get(URL, cookies=cookies).content)

it's not working out; I can see by dumping the soup that I'm not giving the webserver my credentials properly.
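
Pulled together, the explicit cookie-passing approach above looks roughly like this (a minimal sketch; the URLs, the credentials, and the Soup alias for bs4's BeautifulSoup are placeholders):

import requests
from bs4 import BeautifulSoup as Soup

logon_url = 'logonURL'    # placeholder
content_url = 'URL.aspx'  # placeholder

# authenticate with HTTP Basic Auth and capture the cookies the server sets
r = requests.get(logon_url, auth=('username', 'password'))
cookies = r.cookies

# re-send those cookies with the request for the content page
page = requests.get(content_url, cookies=cookies)
print(page.status_code)
soup = Soup(page.content, 'html.parser')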

I tried running a requests session:

def getCookies(self):
    # use a persistent session so cookies are kept between requests
    self.s = requests.session()
    username = 'username'
    password = 'password'
    URL = 'logURL'
    r = self.s.get(URL, auth=(username, password))

and I get the same result: no joy.
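
A quick sanity check is to dump what the session actually received from that logon request (a minimal sketch, meant to run inside getCookies right after the GET):

print(r.status_code)  # a 401/403 here would mean the basic-auth credentials were never accepted
print(requests.utils.dict_from_cookiejar(self.s.cookies))  # the cookies the session will re-send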

I looked at the headers via the Live HTTP Headers extension in Firefox when I visit the second page, and I see a very different cookie header:

Cookie: WT_FPC=id=264b0aa85e0247eb4f11355304127862:lv=1355317068013:ss=1355314918680; UserInfo=Username=username; BIGipServerPE_Journals.lww.com_80=1423559690.20480.0000; PlatformAuthCookie=true; Institution=ReferrerUrl=http://logonURL.com/?wa=wsignin1.0&wtrealm=urn:adis&wctx=http://URL.com/_layouts/Authenticate.aspx?Source=%252fpecnews%252ftoc%252f2012%252f06440&token=method|ExpireAbsolute; counterSessionGuidId=6e2bd57f-b6da-4dd4-bcb0-742428e08b5e; MyListsRefresh=12/13/2012 12:59:04 AM; ASP.NET_SessionId=40a04p45zppozc45wbadah45; JournalsLockCookie=id=85d1f38f-dcbb-476a-bc2e-92f7ac1ae493&ip=10.204.217.84; FedAuth=77u/PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0idXRmLTgiPz48U2VjdXJpdHlDb250ZXh0VG9rZW4gcDE6SWQ9Il9mMGU5N2M3Zi1jNzQ5LTQ4ZjktYTUxNS1mODNlYjJiNGNlYzUtNEU1MDQzOEY0RTk5QURCNDFBQTA0Mjc0RDE5QzREMEEiIHhtbG5zOnAxPSJodHRwOi8vZG9jcy5vYXNpcy1vcGVuLm9yZy93c3MvMjAwNC8wMS9vYXNpcy0yMDA0MDEtd3NzLXdzc2VjdXJpdHktdXRpbGl0eS0xLjAueHNkIiB4bWxucz0iaHR0cDovL2RvY3Mub2FzaXMtb3Blbi5vcmcvd3Mtc3gvd3Mtc2VjdXJlY29udmVyc2F0aW9uLzIwMDUxMiI+PElkZW50aWZpZXI+dXJuOnV1aWQ6ZjJmNGY5MGItMmE4Yy00OTdlLTkwNzktY2EwYjM3MTBkN2I1PC9JZGVudGlmaWVyPjxJbnN0YW5jZT51cm46dXVpZDo2NzMxN2U5Ny1lMWQ3LTQ2YzUtOTg2OC05ZGJhYjA3NDkzOWY8L0luc3RhbmNlPjwvU2VjdXJpdHlDb250ZXh0VG9rZW4+

I have redacted the username, password, and URLs from the question for obvious reasons.

Am I missing something obvious? Is there a different/proper way to capture the cookie? The current method I'm using is not working.

EDIT:

This is a self-contained version of the session-based code:

import requests
from bs4 import BeautifulSoup as Soup

s = requests.session()
username = 'username'
password = 'password'
URL = 'logonURL.aspx'
# log in through the session so it keeps any cookies the server sets
r = s.get(URL, auth=(username, password))
URL = r"URL.aspx"
# fetch the content page through the same session
soup = Soup(s.get(URL).content)

Reading a dump of the soup, I can see in the HTML that it's telling me I don't have access; this string only appears in the browser when you're not logged in.

asked Dec 13 '12 by Jay Gattuso

1 Answer

I had a similar problem and found help in this question. The cookie jar was empty, and to actually get the cookies I needed to use a session.

import requests

session = requests.session()
# post the login form; the session keeps any cookies the server sets
p = session.post("http://example.com", {'user': user, 'password': password})
print('headers', p.headers)
print('cookies', requests.utils.dict_from_cookiejar(session.cookies))
print('html', p.text)
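
Once that login succeeds, the session re-sends its stored cookies automatically on every later request, so a protected page can then be fetched without passing cookies by hand (a minimal sketch; the URL is a placeholder):

# the session re-sends its stored cookies on this request automatically
p2 = session.get("http://example.com/some-protected-page")
print(p2.status_code)
print(p2.text)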

answered Sep 28 '22 by arhuaco