I am trying to use the Twitter search web service in Python. I want to call a web service like:
http://search.twitter.com/search.json?q=blue%20angels&rpp=5&include_entities=true&result_type=mixed
from my Python program.
Can anybody tell me:
how to use an XMLHttpRequest-style object in Python,
how to pass parameters to it, and
how to get the data into a dictionary?
Here is my attempt (Python 2):
import urllib
url = "http://search.twitter.com/search.json?q=blue%20angels&rpp=5&include_entities=true&result_type=mixed"
urlobj = urllib.urlopen(url)
data = urlobj.read()
print data
Thanks.
You don't need an "asynchronous HTTP request" to use the Twitter search API:
import json
import urllib
import urllib2
# make query
query = urllib.urlencode(dict(q="blue angel", rpp=5, include_entities=1,
                              result_type="mixed"))
# make request
resp = urllib2.urlopen("http://search.twitter.com/search.json?" + query)
# make dictionary (parse json response)
d = json.load(resp)
There are several libraries that provide a nicer object-oriented interface around these HTTP requests.
To make multiple requests concurrently, you could use gevent:
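For what it's worth, a rough Python 3 sketch of the same flow would use urllib.request and urllib.parse (urllib/urllib2 were merged and split in Python 3). The search.twitter.com endpoint itself is long retired, so the actual network call is shown commented out and a canned JSON string stands in for the response:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# build the query string, letting urlencode handle the escaping
query = urlencode({"q": "blue angel", "rpp": 5,
                   "include_entities": 1, "result_type": "mixed"})
url = "http://search.twitter.com/search.json?" + query

# d = json.load(urlopen(url))  # real network call; this endpoint no longer exists

# json.loads parses a JSON string into a dict, just as json.load
# does for a file-like response object
sample = '{"results": [{"text": "hello"}, {"text": "world"}]}'
d = json.loads(sample)
print(len(d["results"]))
```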
import gevent
import gevent.monkey; gevent.monkey.patch_all()  # patch stdlib so urllib2 cooperates with gevent
import json
import urllib
import urllib2
def f(querystr):
    query = urllib.urlencode(dict(q=querystr, rpp=5, include_entities=1,
                                  result_type="mixed"))
    resp = urllib2.urlopen("http://search.twitter.com/search.json?" + query)
    d = json.load(resp)
    print('number of results %d' % (len(d['results']),))

jobs = [gevent.spawn(f, q) for q in ['blue angel', 'another query']]
gevent.joinall(jobs)  # wait for all jobs to complete
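If you would rather stay in the standard library than pull in gevent, concurrent.futures can run the same blocking fetch function in a thread pool. This is only a sketch: fetch_count is a hypothetical stand-in for the urlopen + json.load call above, using a canned payload instead of the retired endpoint:

```python
import json
from concurrent.futures import ThreadPoolExecutor

def fetch_count(querystr):
    # real code would urlopen the search URL built from querystr;
    # a canned JSON payload stands in for the network response here
    payload = '{"results": [{}, {}, {}]}'
    d = json.loads(payload)
    return querystr, len(d["results"])

# map the fetch over several queries concurrently
with ThreadPoolExecutor(max_workers=2) as pool:
    for q, n in pool.map(fetch_count, ["blue angel", "another query"]):
        print("number of results for %r: %d" % (q, n))
```

pool.map preserves the input order of the queries, which keeps the output deterministic even though the fetches run concurrently.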