I have a function that makes a GET request to the JIRA REST API to pull down the JSON object of a JIRA ticket.
It uses the requests module. But I don't want to use it anymore, because any time I want someone else to run my script, they have to jump through fiery hoops to install the requests module: they're behind a corporate proxy, and they don't have the time to put in the extra work.
So instead of asking people to do something they don't have the patience or time to do, I'd much rather replace the requests module with something else so that this script is more of an out-of-the-box solution.
Here's what I have now:
import requests
import pprint

def pull_jira_info(jira, user, pw, url):
    """
    Arguments:
    jira - the JIRA issue number
    url - the first part of the JIRA server url
    user - JIRA username
    pw - JIRA password
    """
    url += '/rest/api/2/issue/' + jira
    r = requests.get(url, auth=(user, pw), verify=False)
    jira_info = r.json()
    pprint.pprint(jira_info)
    return jira_info
My guess is that I can do a simple swap of the requests module with some other module (built into Python) that does GET requests, without having to change too much code.
Does anyone have a simple go-to for this that isn't the requests module? What would that look like when implemented in the current function I have?
Here's what I came up with. It works perfectly!
import base64
import json
import pprint
import urllib2

def auth_str(user, pw):
    """
    Arguments:
    user - (Required) jira username
    pw - (Required) jira password
    """
    auth = base64.b64encode('%s:%s' % (user, pw))
    return auth

def pull_jira(jira, user, pw, url):
    """
    Arguments:
    jira - (Required) jira issue number
    user - (Required) jira username
    pw - (Required) jira password
    url - (Required) jira server url
    """
    url += '/rest/api/2/issue/' + jira
    r = urllib2.Request(url)
    r.add_header('Authorization', 'Basic %s' % auth_str(user, pw))
    r.add_header('Content-Type', 'application/json')
    jira_data = json.load(urllib2.urlopen(r))
    pprint.pprint(jira_data)
    return jira_data
Yes, there are some things I could do to clean this up, but I just wanted to go ahead and post a working update showing how I replaced the requests module with urllib2.
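One caveat worth noting: urllib2 only exists in Python 2; in Python 3 it was merged into urllib.request. For anyone on Python 3, here is a minimal sketch of the same function under that assumption (same argument names and endpoint as above; the only real changes are the module name and the fact that b64encode works on bytes in Python 3):

```python
import base64
import json
import pprint
import urllib.request


def pull_jira(jira, user, pw, url):
    """
    Arguments:
    jira - (Required) jira issue number
    user - (Required) jira username
    pw - (Required) jira password
    url - (Required) jira server url
    """
    url += '/rest/api/2/issue/' + jira
    # b64encode takes bytes in Python 3, so encode the credentials
    # and decode the result back to a str for the header value
    auth = base64.b64encode(('%s:%s' % (user, pw)).encode()).decode()
    req = urllib.request.Request(url)
    req.add_header('Authorization', 'Basic %s' % auth)
    req.add_header('Content-Type', 'application/json')
    with urllib.request.urlopen(req) as resp:
        jira_data = json.load(resp)
    pprint.pprint(jira_data)
    return jira_data
```

Note this version does not replicate requests' verify=False; if your JIRA server uses a self-signed certificate you would need to pass an unverified ssl context to urlopen.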