
How to find all the JavaScript requests made from my browser when I'm accessing a site

I want to scrape the contents of LinkedIn using requests and bs4, but I'm facing a problem with the JavaScript that loads the page after I sign in (I don't get the home page directly). I don't want to use Selenium.

Here is my code:

import requests
from bs4 import BeautifulSoup

class Linkedin():
    def __init__(self, url):
        self.url = url
        self.header = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) "
                                     "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.97 Safari/537.36"}

    def saveResultToHtmlFile(self, nameOfFile=None):
        if nameOfFile is None:
            nameOfFile = "Linkedin_page"
        with open(nameOfFile + ".html", "wb") as file:
            file.write(self.response.content)

    def getSignInPage(self):
        self.sess = requests.Session()
        self.response = self.sess.get(self.url, headers=self.header)
        soup = BeautifulSoup(self.response.content, "html.parser")
        self.csrf = soup.find(attrs={"name": "loginCsrfParam"})["value"]

    def connectToMyLinkedin(self):
        self.form_data = {"session_key": "[email protected]",
                          "loginCsrfParam": self.csrf,
                          "session_password": "mypassword"}
        self.url = "https://www.linkedin.com/uas/login-submit"
        self.response = self.sess.post(self.url, headers=self.header, data=self.form_data)

    def getAnyPage(self, url):
        self.response = self.sess.get(url, headers=self.header)


url = "https://www.linkedin.com/"

linkedin_page = Linkedin(url)
linkedin_page.getSignInPage()
linkedin_page.connectToMyLinkedin()  # I'm connected, but the JavaScript is still loading
linkedin_page.getAnyPage("https://www.linkedin.com/jobs/")
linkedin_page.saveResultToHtmlFile()

I'd like help getting past the JavaScript-loaded content without using Selenium.

Ali asked Dec 18 '22


2 Answers

Although it's technically possible to simulate all the calls from Python, for a dynamic page like LinkedIn I think it will be quite tedious and brittle.

Anyway, you'd open the developer tools in your browser before you open LinkedIn and watch what the traffic looks like. You can filter for the requests made from JavaScript (in Firefox, the filter is called XHR).

(screenshot: network tab in developer tools)
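If you save the captured traffic as a HAR export from the network tab, you can filter out the XHR/JSON requests programmatically. This is a minimal sketch; the tiny HAR excerpt and the example.com URLs are made up for illustration, and real exports carry many more fields:

```python
import json

# A tiny excerpt of what a HAR export ("Save all as HAR" in the
# network tab) looks like; real exports are much larger.
har = json.loads("""
{"log": {"entries": [
  {"request": {"url": "https://example.com/page"},
   "response": {"content": {"mimeType": "text/html"}}},
  {"request": {"url": "https://example.com/api/feed"},
   "response": {"content": {"mimeType": "application/json"}}}
]}}
""")

def xhr_urls(har: dict):
    """Return the URLs whose responses look like XHR/JSON traffic."""
    return [e["request"]["url"]
            for e in har["log"]["entries"]
            if "json" in e["response"]["content"].get("mimeType", "")]

print(xhr_urls(har))
```

That leaves you with a shortlist of endpoints worth replaying from your own session.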

You would then simulate the necessary/interesting requests in your code. The benefit is that the servers usually return structured data to JavaScript, such as JSON, so you won't need to do as much HTML parsing.
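For example, once you've spotted a JSON endpoint in the network tab, you can replay it in the same requests.Session and work with parsed JSON instead of scraping HTML. The payload below is purely illustrative (in practice you'd get it from something like self.sess.get(api_url, headers=...).json()):

```python
import json

# Hypothetical example of the JSON a job-listing endpoint might return.
raw = '{"elements": [{"title": "Data Engineer"}, {"title": "Backend Dev"}]}'

def extract_titles(payload: str):
    """Pull job titles out of a JSON payload instead of parsing HTML."""
    data = json.loads(payload)
    return [item["title"] for item in data.get("elements", [])]

print(extract_titles(raw))
```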

If you find you're not progressing very much this way (it really depends on the particular site), then you will probably have to use Selenium or an alternative such as:

  • https://robotframework.org/
  • https://miyakogi.github.io/pyppeteer/ (port of Puppeteer to Python)
Krab answered Dec 28 '22


You should send all the XHR and JS requests manually, in the same session you created during login. Also pass all the header fields in the request (copy them from the network tools).

self.header_static = {
    'authority': 'static-exp2.licdn.com',
    'method': 'GET',
    'path': '/sc/h/c356usw7zystbud7v7l42pz0s',
    'scheme': 'https',
    'accept': '*/*',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-GB,en;q=0.9,en-US;q=0.8,hi;q=0.7,la;q=0.6',
    'cache-control': 'no-cache',
    'dnt': '1',
    'pragma': 'no-cache',
    'referer': 'https://www.linkedin.com/jobs/',
    'sec-fetch-mode': 'no-cors',
    'sec-fetch-site': 'cross-site',
    'user-agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.97 Mobile Safari/537.36'
}

def postConnectionRequests(self):
    urls = [
        "https://static-exp2.licdn.com/sc/h/62mb7ab7wm02esbh500ajmfuz",
        "https://static-exp2.licdn.com/sc/h/mpxhij2j03tw91bpplja3u9b",
        "https://static-exp2.licdn.com/sc/h/3nq91cp2wacq39jch2hz5p64y",
        "https://static-exp2.licdn.com/sc/h/emyc3b18e3q2ntnbncaha2qtp",
        "https://static-exp2.licdn.com/sc/h/9b0v30pbbvyf3rt7sbtiasuto",
        "https://static-exp2.licdn.com/sc/h/4ntg5zu4sqpdyaz1he02c441c",
        "https://static-exp2.licdn.com/sc/h/94cc69wyd1gxdiytujk4d5zm6",
        "https://static-exp2.licdn.com/sc/h/ck48xrmh3ctwna0w2y1hos0ln",
        "https://static-exp2.licdn.com/sc/h/c356usw7zystbud7v7l42pz0s",
    ]

    for url in urls:
        self.sess.get(url, headers=self.header_static)
        print("REQUEST SENT TO " + url)

I called the postConnectionRequests() function after logging in and before saving the HTML content, and received the complete page. Hope this helps.

ScriptKiddieOnAComputer answered Dec 28 '22