I am trying to port to Python 3 a script that submits XML feeds, found here:
https://developers.google.com/search-appliance/documentation/files/pushfeed_client.py.txt
After running 2to3.py and making a few minor adjustments to remove any syntax errors, the script fails with this:
(py33dev) d:\dev\workspace>python pushfeed_client.py --datasource="TEST1" --feedtype="full" --url="http://gsa:19900/xmlfeed" --xmlfilename="test.xml"
Traceback (most recent call last):
  File "pushfeed_client.py", line 108, in <module>
    main(sys.argv)
  File "pushfeed_client.py", line 56, in main
    result = urllib.request.urlopen(request_url)
  File "C:\Python33\Lib\urllib\request.py", line 156, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Python33\Lib\urllib\request.py", line 469, in open
    response = self._open(req, data)
  File "C:\Python33\Lib\urllib\request.py", line 487, in _open
    '_open', req)
  File "C:\Python33\Lib\urllib\request.py", line 447, in _call_chain
    result = func(*args)
  File "C:\Python33\Lib\urllib\request.py", line 1268, in http_open
    return self.do_open(http.client.HTTPConnection, req)
  File "C:\Python33\Lib\urllib\request.py", line 1253, in do_open
    r = h.getresponse()
  File "C:\Python33\Lib\http\client.py", line 1147, in getresponse
    response.begin()
  File "C:\Python33\Lib\http\client.py", line 358, in begin
    version, status, reason = self._read_status()
  File "C:\Python33\Lib\http\client.py", line 340, in _read_status
    raise BadStatusLine(line)
http.client.BadStatusLine: <!DOCTYPE html>
Why is that exception raised for the server's response? Here's the full response from the GSA when I sniffed the session:
<!DOCTYPE html>
<html lang=en>
<meta charset=utf-8>
<meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
<title>Error 400 (Bad Request)!!1</title>
<style>
*{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}
</style>
<a href=//www.google.com/><img src=//www.google.com/images/errors/logo_sm.gif alt=Google></a>
<p><b>400.</b> <ins>That’s an error.</ins>
<p>Your client has issued a malformed or illegal request. <ins>That’s all we know.</ins>
And it did return an HTTP 400. I can reliably cause this issue whenever the XML payload contains a multi-byte UTF-8 character. It works flawlessly when the payload is plain ASCII. Here's the most basic version of code I can use to reliably recreate the issue:
import http.client

http.client.HTTPConnection.debuglevel = 1

with open("GSA_full_Feed.xml", encoding='utf-8') as xdata:
    payload = xdata.read()

content_length = len(payload)
feed_path = "xmlfeed"
content_type = "multipart/form-data; boundary=----------boundary_of_feed_data$"
headers = {"Content-type": content_type, "Content-length": content_length}

conn = http.client.HTTPConnection("gsa", 19900)
conn.request("POST", feed_path, body=payload.encode("utf-8"), headers=headers)
res = conn.getresponse()
print(res.read())
conn.close()
And here's a sample XML payload that is used to cause the exception:
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE gsafeed PUBLIC "-//Google//DTD GSA Feeds//EN" "gsafeed.dtd">
<gsafeed>
<header>
<datasource>TEST1</datasource>
<feedtype>full</feedtype>
</header>
<group>
<record action="add" mimetype="text/html" url="https://myschweetassurl.com">
<metadata>
<meta content="shit happens, then you die" name="description"/>
</metadata>
<content>wacky Umläut test of non utf-8 characters</content>
</record>
</group>
</gsafeed>
The only delta I can find between the Python 2 and Python 3 versions is the Content-length header on each request. The Python 3 version is consistently shorter than the Python 2 version: 870 vs. 873.
After lots of wiresharking, I figured out that the cause (and solution) of the problem is the way the Content-length header was being set. In my Python 3 port of the script, I copied over the code that sets the Content-length, which is this:
headers['Content-length']=str(len(body))
That is incorrect! The correct way would be this:
headers['Content-length']=str(len(bytes(body, 'utf-8')))
The payload must be sent as a bytes object, and once you encode it, its length can differ from the length of the original string. The request is then built with the same encoded body:
return urllib.request.Request(theurl, bytes(body, 'utf-8'), headers)
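To make the mismatch concrete, here is a minimal sketch; the string literal is just an illustration, not the actual feed content:

import sys

body = '<content>wacky Umläut test</content>'

# Character count of the str object.
print(len(body))
# Byte count after encoding; one larger here because 'ä' encodes to two bytes in UTF-8.
print(len(body.encode('utf-8')))

# Content-length must describe the bytes actually sent on the wire:
headers = {'Content-length': str(len(body.encode('utf-8')))}
print(headers, file=sys.stderr)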
You can safely omit setting the Content-length header manually when using anything that derives from http.client.HTTPConnection. Its request handling checks for a Content-length header and, if it's missing, sets it based on the length of the body it is given.
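Applied to the minimal reproduction above, a sketch of the fix could look like this (same hypothetical host, port, and feed path as before): encode the body once up front and either compute the header from the encoded bytes or leave the header out entirely and let http.client fill it in.

import http.client

# Encode once, up front, so the same bytes are measured and sent.
with open("GSA_full_Feed.xml", encoding='utf-8') as xdata:
    payload = xdata.read().encode("utf-8")

headers = {
    "Content-type": "multipart/form-data; boundary=----------boundary_of_feed_data$",
    # Either compute the length from the encoded bytes...
    "Content-length": str(len(payload)),
    # ...or omit this header and HTTPConnection.request() will set it from the body length.
}

conn = http.client.HTTPConnection("gsa", 19900)
conn.request("POST", "xmlfeed", body=payload, headers=headers)
print(conn.getresponse().read())
conn.close()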
The issue was a subtle but important difference in how Python 2 and Python 3 handle and encode strings. In hindsight, it was no fluke that the plain-ASCII version worked: for an ASCII-only string the character count equals the byte count, so the Content-length happened to be correct, whereas with multi-byte UTF-8 characters the two diverge and the header under-reports the body length.