urllib.urlencode doesn't like unicode values: how about this workaround?


You should indeed be nervous. The whole idea that you might have a mixture of bytes and text in some data structure is horrifying. It violates the fundamental principle of working with string data: decode at input time, work exclusively in unicode, encode at output time.
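
A minimal sketch of that discipline, with made-up sample data:

# Decode at input time, work exclusively in unicode, encode at output time.
raw = 'ol\xc3\xa1 mundo'      # UTF-8 bytes arriving from the outside world (made-up sample)

text = raw.decode('utf-8')    # decode at input time -> u'ol\xe1 mundo'
text = text.upper()           # work exclusively in unicode

out = text.encode('utf-8')    # encode at output time -> 'OL\xc3\x81 MUNDO'
print repr(out)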

Update in response to comment:

You are about to output some sort of HTTP request. This needs to be prepared as a byte string. The fact that urllib.urlencode is not capable of properly preparing that byte string if there are unicode characters with ordinal >= 128 in your dict is indeed unfortunate. If you have a mixture of byte strings and unicode strings in your dict, you need to be careful. Let's examine just what urlencode() does:

>>> import urllib
>>> tests = ['\x80', '\xe2\x82\xac', 1, '1', u'1', u'\x80', u'\u20ac']
>>> for test in tests:
...     print repr(test), repr(urllib.urlencode({'a':test}))
...
'\x80' 'a=%80'
'\xe2\x82\xac' 'a=%E2%82%AC'
1 'a=1'
'1' 'a=1'
u'1' 'a=1'
u'\x80'
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "C:\python27\lib\urllib.py", line 1282, in urlencode
    v = quote_plus(str(v))
UnicodeEncodeError: 'ascii' codec can't encode character u'\x80' in position 0: ordinal not in range(128)

The last two tests demonstrate the problem with urlencode(). Now let's look at the str tests.

If you insist on having a mixture, then you should at the very least ensure that the str objects are encoded in UTF-8.

'\x80' is suspicious -- it is not the result of any_valid_unicode_string.encode('utf8').
'\xe2\x82\xac' is OK; it's the result of u'\u20ac'.encode('utf8').
'1' is OK -- all ASCII characters are OK on input to urlencode(), which will percent-encode characters such as '%' if necessary.

Here's a suggested converter function. Unlike yours, it doesn't both mutate the input dict and return it; it returns a new dict. It forces an exception if a value is a str object that is not valid UTF-8. By the way, your concern about it not handling nested objects is a little misdirected -- your code works only with dicts, and the concept of nested dicts doesn't really fly.

def encoded_dict(in_dict):
    out_dict = {}
    for k, v in in_dict.iteritems():
        if isinstance(v, unicode):
            v = v.encode('utf8')
        elif isinstance(v, str):
            # Must already be encoded in UTF-8; this decode only validates (raises if not)
            v.decode('utf8')
        out_dict[k] = v
    return out_dict

and here's the output, using the same tests in reverse order (because the nasty one is at the front this time):

>>> for test in tests[::-1]:
...     print repr(test), repr(urllib.urlencode(encoded_dict({'a':test})))
...
u'\u20ac' 'a=%E2%82%AC'
u'\x80' 'a=%C2%80'
u'1' 'a=1'
'1' 'a=1'
1 'a=1'
'\xe2\x82\xac' 'a=%E2%82%AC'
'\x80'
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "<stdin>", line 8, in encoded_dict
  File "C:\python27\lib\encodings\utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 0: invalid start byte
>>>

Does that help?


I had the same problem with German umlauts. The solution is pretty simple:

In Python 3+, urlencode lets you specify the encoding directly:

>>> from urllib.parse import urlencode
>>> args = {'a': 1, 'en': 'hello', 'pt': u'olá'}
>>> urlencode(args, encoding='utf-8')
'a=1&en=hello&pt=ol%C3%A1'

It seems this is a wider topic than it looks, especially when you have to deal with more complex dictionary values. I found 3 ways of solving the problem:

  1. Patch urllib.py to include encoding parameter:

    def urlencode(query, doseq=0, encoding='ascii'):
    

    and replace all str(v) conversions with something like v.encode(encoding)

    Obviously not good, since it's hardly redistributable and even harder to maintain.

  2. Change the default Python encoding as described here. The author of the blog pretty clearly describes some problems with this solution, and who knows how many more could be lurking in the shadows. So it doesn't look good to me either.

  3. So I, personally, ended up with this abomination, which encodes all unicode strings to UTF-8 byte strings in any (reasonably) complex structure:

    def encode_obj(in_obj):
    
        def encode_list(in_list):
            out_list = []
            for el in in_list:
                out_list.append(encode_obj(el))
            return out_list
    
        def encode_dict(in_dict):
            out_dict = {}
            for k, v in in_dict.iteritems():
                out_dict[k] = encode_obj(v)
            return out_dict
    
        if isinstance(in_obj, unicode):
            return in_obj.encode('utf-8')
        elif isinstance(in_obj, list):
            return encode_list(in_obj)
        elif isinstance(in_obj, tuple):
            return tuple(encode_list(in_obj))
        elif isinstance(in_obj, dict):
            return encode_dict(in_obj)
    
        return in_obj
    

    You can use it like this: urllib.urlencode(encode_obj(complex_dictionary))

    To encode keys also, out_dict[k] can be replaced with out_dict[k.encode('utf-8')], but it was a bit too much for me.
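
    If you do want the keys encoded too, the inner encode_dict would become something like this (an untested sketch of the change just described):

        def encode_dict(in_dict):
            out_dict = {}
            for k, v in in_dict.iteritems():
                # Encode unicode keys as well as unicode values
                key = k.encode('utf-8') if isinstance(k, unicode) else k
                out_dict[key] = encode_obj(v)
            return out_dict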


It seems that you can't pass a unicode object to urlencode, so before calling it you should encode every unicode parameter. How to do this properly seems very dependent on the context, but in your code you should always be aware of when to use the unicode Python object (the unicode representation) and when to use the encoded object (the byte string).

Also, encoding values that are already str is "superfluous": see What is the difference between encode/decode?
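
To make that concrete, a minimal sketch (with made-up parameter names) of encoding every unicode value just before building the query string:

# -*- coding: utf-8 -*-
import urllib

params = {'q': u'olá', 'lang': 'pt'}   # made-up example parameters

# Encode unicode values to UTF-8 byte strings; leave existing byte strings alone
encoded = dict((k, v.encode('utf-8') if isinstance(v, unicode) else v)
               for k, v in params.iteritems())

print urllib.urlencode(encoded)   # e.g. q=ol%C3%A1&lang=pt (pair order may vary)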