I have a bytearray that I need to use as a key in a dictionary. Ideally I'd like to do this without making a copy of memory the size of the bytearray. Is there any way to do this? Basically,
b = some bytearray
d[bytes(b)] = x
Is there any faster way to do this? bytes(b) is an O(len(b)) operation, which is undesirable.
An object is hashable if it has a hash value which never changes during its lifetime (it needs a __hash__() method) and can be compared to other objects (it needs an __eq__() method). Hashable objects which compare equal must have the same hash value.
Hashability makes an object usable as a dictionary key and a set member, because these data structures use the hash value internally. All of Python's immutable built-in objects (integers, floats, bools, strings, tuples, and so on) are hashable, while mutable containers such as lists and dictionaries are not. Objects which are instances of user-defined classes are hashable by default: they all compare unequal, and their hash value is their id().
The hash value of an object must never change over the lifetime of that object, and this is exactly why most mutable objects aren't hashable.
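To make the distinction concrete (the id() equality below holds in CPython 2, which is what the REPL sessions in the answers use):
>>> d = {(1, 2, 3): 'ok'}       # tuples are immutable and hashable
>>> d[[1, 2, 3]] = 'ok'         # lists are mutable, so this fails
Traceback (most recent call last):
  ...
TypeError: unhashable type: 'list'
>>> class Point(object):
...     pass
...
>>> p = Point()
>>> hash(p) == id(p)            # default hash for user-defined classes
True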
Any hash algorithm that actually does its job correctly will use O(len(b)) time. So the answer to "is there any faster way to do this" is no.
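One mitigating detail worth noting (CPython-specific, so treat it as an implementation assumption rather than a language guarantee): a Python 2 str caches its hash after the first computation, so if you make the immutable copy once and keep reusing that same key object, you pay the O(n) hashing cost only once:
b = some bytearray
key = str(b)    # one O(len(b)) copy
d[key] = x      # hash computed here, then cached on the str object
y = d[key]      # reusing the same key object reuses the cached hash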
If your actual concern is memory usage, then you could, in principle, add a __hash__ method to a subclass of bytearray. But that's a pretty bad idea. Look what happens:
>>> class HashableBytearray(bytearray):
...     def __hash__(self):
...         return hash(str(self))
...
>>> h = HashableBytearray('abcd')
>>> hash(h)
-2835746963027601024
>>> h[2] = 'z'
>>> hash(h)
-2835746963002600949
So the same object could hash to two different spots in the dictionary, which isn't supposed to happen. And it gets worse:
>>> d = dict()
>>> hb1 = HashableBytearray('abcd')
>>> hb2 = HashableBytearray('abcd')
>>> d[hb1] = 0
>>> d[hb2] = 1
>>> d
{bytearray(b'abcd'): 1}
Ok, so far, so good. The two keys compare equal, so there should be only one item in the dictionary. Everything is working as expected. Now let's see what happens when we change hb1:
>>> hb1[2] = 'z'
>>> d[hb2] = 2
>>> d
{bytearray(b'abzd'): 1, bytearray(b'abcd'): 2}
See how even though hb2 didn't change at all, it created a new key-value pair in the dictionary this time?
Every time I passed a key to d, that key was equal to 'abcd'. But because the value of the first key changed after being added to the dictionary, Python couldn't tell that the value of the new key was the same as the old key had been when it was added. Now there are two key-value pairs in the dictionary, when there should be only one.
This is only one of many ways that using mutable values as keys can lead to unpredictable and very wrong behavior. Just convert the bytearray to an immutable type, or work with immutable types in the first place.
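For concreteness, the safe pattern looks like this (Python 2 REPL, matching the sessions above; in Python 2, bytes is simply an alias for str, so the copy is an ordinary string):
>>> b = bytearray('abcd')
>>> d = {}
>>> d[bytes(b)] = 0             # one O(n) copy into an immutable key
>>> b[2] = 'z'                  # mutating b can no longer corrupt the dict
>>> d[bytes(bytearray('abcd'))]
0
>>> d
{'abcd': 0}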
And for the inquisitive: sure, buffer caches the first hash, but that doesn't help at all. There are only two distinct key values here, so this should generate only two dict entries:
>>> a, b, c = bytearray('abcd'), bytearray('abcd'), bytearray('abzd')
>>> a_buf, b_buf, c_buf = buffer(a), buffer(b), buffer(c)
>>> d = {b_buf:1, c_buf:2}
>>> b[2] = 'z'
>>> d[a_buf] = 0
But it generates three:
>>> d
{<read-only buffer for 0x1004a2300, size -1, offset 0 at 0x100499cb0>: 1,
<read-only buffer for 0x1004a2420, size -1, offset 0 at 0x100499cf0>: 0,
<read-only buffer for 0x1004a22d0, size -1, offset 0 at 0x100499c70>: 2}
If you're concerned about time, and the key that you are using is always the same object, you can use its id (its location in memory) as the key in your dictionary:
b = some bytearray
d[id(b)] = x
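A quick illustration of what keying on id() does and does not give you (remember that id() reflects identity, not content, and that an id can be reused once its object is garbage-collected, so you must keep b alive as long as the entry matters):
>>> b = bytearray('abcd')
>>> d = {id(b): 'value'}
>>> d[id(b)]                 # same object, same id: lookup succeeds
'value'
>>> c = bytearray('abcd')    # equal content but a different object...
>>> id(c) in d               # ...so identity-based lookup finds nothing
False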
If you're concerned about memory, you can use a good cryptographic hash function over your bytearray, and you'll probably never get a collision (git, for example, uses SHA-1, and there are discussions on the internet about how likely an inadvertent SHA-1 collision is). If you're okay with that infinitesimal risk, you could:
import hashlib

b = some bytearray
d[hashlib.sha1(b).hexdigest()] = x
That's O(n) in the size of your bytearray each time you calculate the hash, but a different bytearray read in at a later time, representing the same sequence of bytes, will hash to the same dictionary key.
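Here is that property in action (a minimal sketch in the same Python 2 REPL style as above; hashlib.sha1 accepts any buffer-API object, including bytearray):
>>> import hashlib
>>> b1 = bytearray('abcd')
>>> b2 = bytearray('abcd')                       # different object, same bytes
>>> d = {hashlib.sha1(b1).hexdigest(): 'value'}
>>> d[hashlib.sha1(b2).hexdigest()]              # content, not identity, picks the key
'value'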
And @senderle is absolutely right: you don't want to use an actually mutable object by value (as opposed to using an immutable function of it, like id()) as the key to a dictionary. The hash of an object used as a dictionary key must not change; mutating the key violates an invariant that the dictionary relies on.