
Fast HashCode of a Complex Object Graph

I have a fairly complex object and I need to determine the uniqueness of these objects. One approach is to override GetHashCode(). I have implemented the code below:

public override int GetHashCode()
{
    return this._complexObject1.GetHashCode() ^
           this._complexObject2.GetHashCode() ^
           this._complexObject3.GetHashCode() ^
           this._complexObject4.GetHashCode() ^
           this._complexObject5.GetHashCode() ^
           this._complexObject6.GetHashCode() ^
           this._complexObject7.GetHashCode() ^
           this._complexObject8.GetHashCode();
}

These complex objects also override GetHashCode() and perform similar operations.

My project requires these objects to be unique. I work with them very frequently, and the data inside them changes in various ways and places.

I need a faster way to determine the uniqueness of these complex objects, one that takes both performance and memory into account.

Thanks in advance
Munim

Asked by Abdul Munim

1 Answer

Given your comment, it sounds like you may be trying to rely on GetHashCode on its own to determine uniqueness. Don't do that. Hashes aren't meant to be unique - it's meant to be unlikely that two unequal objects will hash to the same value, but not impossible. If you're trying to check that a set of objects has no duplicates, you will have to use Equals as well.
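
For instance, here is a minimal sketch (the type name MyComplexType mirrors the question's code but is an assumption) of an Equals override that is consistent with GetHashCode, comparing the same fields the hash is built from:

public override bool Equals(object obj)
{
    if (ReferenceEquals(this, obj)) return true;
    var other = obj as MyComplexType;
    if (other == null) return false;

    // Compare exactly the fields that feed into GetHashCode,
    // so that equal objects always produce equal hashes.
    return object.Equals(_complexObject1, other._complexObject1)
        && object.Equals(_complexObject2, other._complexObject2)
        && object.Equals(_complexObject3, other._complexObject3);
        // ... and so on for the remaining fields
}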

Note that using XOR for a hashcode can make it more likely that you'll get hash collisions, depending on the individual hash values involved. In particular, it makes any two equal fields "cancel each other out". I generally use this form:

public override int GetHashCode()
{
    unchecked // let the arithmetic wrap on overflow rather than throw in checked builds
    {
        int hash = 17;
        hash = hash * 31 + field1.GetHashCode();
        hash = hash * 31 + field2.GetHashCode();
        hash = hash * 31 + field3.GetHashCode();
        hash = hash * 31 + field4.GetHashCode();
        // ... and so on for the remaining fields
        return hash;
    }
}

... but even so, that's certainly not going to guarantee uniqueness. You should use GetHashCode() to rule out equality, and then use Equals to check the actual equality of any potentially equal values.
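
As a usage sketch (the element type and the allItems collection are assumptions, not from the question), a HashSet<T> performs exactly this two-step check: it uses GetHashCode to find candidate matches and calls Equals only when hashes collide, which makes it a convenient way to detect duplicates:

// Assumes MyComplexType overrides both GetHashCode and Equals, as above.
var seen = new HashSet<MyComplexType>();
foreach (MyComplexType item in allItems)   // allItems: whatever collection you hold
{
    if (!seen.Add(item))
    {
        // Add returned false: an equal object was already in the set.
        Console.WriteLine("Duplicate detected");
    }
}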

Now your question mentions speed - this sounds like the perfect place to use a profiler and some benchmark tests. Are you sure this is a bottleneck? If you have many different types all computing hash values, have you found out which of these is the biggest contributor to the problem?

Some optimisations will depend on exactly how you use the data. If you find that a lot of your time is spent recomputing hashes for values which you know haven't changed, you could cache the hash code... although this obviously becomes trickier when there are fields which themselves refer to complex objects. It's possible that you could cache "leaf node" hashes, particularly if those leaf nodes don't change often (but their usage could vary).
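
As a rough sketch of that caching idea (the property name and the invalidation strategy are assumptions; it only works if every mutation goes through a member that resets the cache):

private int? _cachedHash;          // null means the hash needs recomputing

private ComplexType1 _complexObject1;
public ComplexType1 ComplexObject1
{
    get { return _complexObject1; }
    set { _complexObject1 = value; _cachedHash = null; }   // invalidate on change
}

public override int GetHashCode()
{
    if (_cachedHash == null)
    {
        unchecked
        {
            int hash = 17;
            hash = hash * 31 + (_complexObject1 == null ? 0 : _complexObject1.GetHashCode());
            // ... combine the remaining fields the same way
            _cachedHash = hash;
        }
    }
    return _cachedHash.Value;
}

Note that this breaks down if a nested object mutates without notifying its parent, which is why caching is easiest at the leaf nodes mentioned above.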

Answered by Jon Skeet