Here is a Python script that loads a URL and captures the response time:
import urllib2
import time

opener = urllib2.build_opener()
request = urllib2.Request('http://example.com')

start = time.time()
resp = opener.open(request)   # sends the request, parses status line + headers
resp.read()                   # reads the full response body
ttlb = time.time() - start    # time to last byte
Since my timer is wrapped around the whole request/response (including read()), this will give me the TTLB (time to last byte).
I would also like to get the TTFB (time to first byte), but am not sure where to start/stop my timing. Is urllib2 granular enough for me to add TTFB timers? If so, where would they go?
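For what it's worth, `open()` does not return until the status line and response headers have been parsed, so the moment it returns is a reasonable approximation of the first byte arriving. Here is a minimal sketch of that split, written against Python 3's `urllib.request` (the successor to `urllib2`; the same placement works with `urllib2`):

```python
import time
import urllib.request  # Python 3 successor to urllib2; same idea applies there


def measure(url):
    """Approximate TTFB and TTLB for a single GET request."""
    start = time.time()
    resp = urllib.request.urlopen(url)  # returns once status line + headers arrive
    ttfb = time.time() - start          # ~ time to first byte
    resp.read()                         # drain the rest of the body
    ttlb = time.time() - start          # time to last byte
    return ttfb, ttlb
```

Note this lumps DNS lookup, TCP connect, and server think time together into the TTFB number; the stdlib client cannot break those phases apart, which is one reason to reach for a lower-level client.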
You should use pycurl rather than urllib2: libcurl exposes detailed per-phase timings (DNS, connect, first byte, total) that urllib2 does not.
Install pycurl:

You can use pip or easy_install, or build it from source. You may need superuser privileges (sudo):

easy_install pycurl

Usage:
import pycurl
import sys
import json

WEB_SITES = sys.argv[1]

def main():
    c = pycurl.Curl()
    c.setopt(pycurl.URL, WEB_SITES)                    # set URL
    c.setopt(pycurl.FOLLOWLOCATION, 1)                 # follow redirects
    c.setopt(pycurl.WRITEFUNCTION, lambda data: None)  # discard body (default writes to stdout)
    c.perform()                                        # execute (perform() returns None)
    dns_time = c.getinfo(pycurl.NAMELOOKUP_TIME)       # DNS lookup time
    conn_time = c.getinfo(pycurl.CONNECT_TIME)         # TCP three-way handshake done
    starttransfer_time = c.getinfo(pycurl.STARTTRANSFER_TIME)  # time to first byte (TTFB)
    total_time = c.getinfo(pycurl.TOTAL_TIME)          # total time (time to last byte)
    c.close()
    return json.dumps({'dns_time': dns_time,
                       'conn_time': conn_time,
                       'starttransfer_time': starttransfer_time,
                       'total_time': total_time})

if __name__ == "__main__":
    print main()