I am familiar with how to use the Google Chrome Web Inspector to manually save a webpage as a HAR file with the content. I would like to automate this.
In my searches for tools to automate the generation of a HAR file, I have found some solutions, but none of them save the content of the resources.
I have tried a few of them without any luck. Getting the content of the page I requested (the raw HTML) is doable, but getting the content of every other network resource that loads (CSS, JavaScript, images, etc.) is the problem.
I think the most reliable way to automate HAR generation is to use BrowserMob Proxy together with ChromeDriver and Selenium.
Here is a Python script that programmatically generates a HAR file and can be integrated into your development cycle. It also captures the response content.
from browsermobproxy import Server
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
import json

# Start the BrowserMob Proxy server and create a proxy to route traffic through
server = Server("path/to/browsermob-proxy")
server.start()
proxy = server.create_proxy()

# Point Chrome at the proxy so every request and response is recorded
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--proxy-server={0}".format(proxy.proxy))
driver = webdriver.Chrome(service=Service("path/to/chromedriver"), options=chrome_options)

# Start a new HAR capture; captureContent tells the proxy to keep response bodies
proxy.new_har("http://stackoverflow.com", options={'captureHeaders': True, 'captureContent': True})
driver.get("http://stackoverflow.com")

# proxy.har is the HAR as a Python dict; serialize it to JSON
result = json.dumps(proxy.har, ensure_ascii=False)
print(result)

server.stop()
driver.quit()
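To check that the response bodies really made it into the capture, you can write the HAR to a .har file (which Chrome DevTools or any HAR viewer can load) and walk its entries; with captureContent enabled, each entry's response.content should carry a text field (binary bodies may be base64-encoded, indicated by an encoding field). A minimal sketch, assuming it runs just before server.stop() in the script above; the stackoverflow.har filename is only illustrative:
har = proxy.har  # plain Python dict in HAR format

# Save the capture so it can be opened later in a HAR viewer
with open("stackoverflow.har", "w") as f:
    json.dump(har, f, ensure_ascii=False)

# Walk the entries and report which responses have captured content
for entry in har['log']['entries']:
    content = entry['response']['content']
    if 'text' in content:
        print(entry['request']['url'], "->", len(content['text']), "chars of body")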
You can also check out Speedprofile, a tool that generates HAR and Navigation Timing data headlessly from both Chrome and Firefox.