
can array access be optimized?

Maybe I'm being misled by my profiler (Netbeans), but I'm seeing some odd behavior, hoping maybe someone here can help me understand it.

I am working on an application that makes heavy use of rather large hash tables (keys are longs, values are objects). The performance with the built-in Java hash table (HashMap specifically) was very poor, and after trying some alternatives -- Trove, Fastutils, Colt, Carrot -- I started working on my own.

The code is very basic, using a double hashing strategy. It works well and shows the best performance of all the options I've tried so far.

The catch is that, according to the profiler, the hash table lookup is the single most expensive method in the entire application -- despite the fact that other methods are called many more times and/or do a lot more logic.

What really confuses me is that the lookups are called from only one class: the calling method does the lookup and then processes the results. Both are called nearly the same number of times, and the calling method has a lot of logic in it to handle the result of the lookup, yet it is about 100x faster.

Below is the code for the hash lookup. It's basically just two accesses into an array (the functions that compute the hash codes, according to profiling, are virtually free). I don't understand how this bit of code can be so slow since it is just array access, and I don't see any way of making it faster.

Note that the code simply returns the bucket matching the key; the caller is expected to process the bucket. 'size' is hash.length/2; hash1 does lookups in the first half of the hash table, hash2 in the second half. key_index is a final int field on the hash table, passed into the constructor, and the values array on the Entry objects is a small array of longs, usually of length 10 or less.

Any thoughts people have on this are much appreciated.

Thanks.

public final Entry get(final long theKey) {
    Entry aEntry = hash[hash1(theKey, size)];

    if (aEntry != null && aEntry.values[key_index] != theKey) {
        aEntry = hash[hash2(theKey, size)];

        if (aEntry != null && aEntry.values[key_index] != theKey) {
            return null;
        }
    }

    return aEntry;
}

Edit: the code for hash1 & hash2

private static int hash1(final long key, final int hashTableSize) { 
    return (int)(key&(hashTableSize-1)); 
}
private static int hash2(final long key, final int hashTableSize) { 
    return (int)(hashTableSize+((key^(key>>3))&(hashTableSize-1))); 
}
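
For context, the relevant fields look roughly like this (a minimal sketch approximated from the description above, not the exact declarations):

class Entry {
    // Small array of longs, usually length 10 or less; the key is stored at key_index.
    long[] values;
}

private final Entry[] hash;   // the table; hash1 probes the first half, hash2 the second
private final int size;       // hash.length / 2 (presumably a power of two, since hash1/hash2 mask with size - 1)
private final int key_index;  // index of the key within Entry.values, passed into the constructor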
Michael asked Feb 25 '23


2 Answers

Nothing in your implementation strikes me as particularly inefficient. I'll admit I don't really follow your hashing/lookup strategy, but if you say it's performant in your circumstances, I'll believe you.

The only thing that I would expect might make some difference is to move the key out of the values array of Entry.

Instead of having this:

class Entry {
    long[] values;
}

//...
if ( entry.values[key_index] == key ) { //...

Try this:

class Entry {
    long key;
    long[] values;
}

//...
if ( entry.key == key ) { //...

Instead of incurring the cost of accessing a member, plus doing bounds checking, then getting a value out of the array, you just incur the cost of accessing the member.
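
Applied to the get() from the question, the lookup would then look something like this (a sketch, assuming the rest of the class stays as posted):

public final Entry get(final long theKey) {
    Entry aEntry = hash[hash1(theKey, size)];

    // Compare against the dedicated key field instead of values[key_index].
    if (aEntry != null && aEntry.key != theKey) {
        aEntry = hash[hash2(theKey, size)];

        if (aEntry != null && aEntry.key != theKey) {
            return null;
        }
    }

    return aEntry;
}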

Is there a random-access data type faster than an array?

I was interested in the answer to this question, so I set up a test environment. This is my Array interface:

interface Array {
    long get(int i);
    void set(int i, long v);
}

This "Array" has undefined behaviour when indices are out of bounds. I threw together the obvious implementation:

class NormalArray implements Array {
    private long[] data;

    public NormalArray(int size) {
        data = new long[size];
    }

    @Override
    public long get(int i) {
        return data[i];
    }

    @Override
    public void set(int i, long v) {
        data[i] = v;
    }
}

And then a control:

class NoOpArray implements Array {
    @Override
    public long get(int i) {
        return 0;
    }
    @Override
    public void set(int i, long v) {
    }
}

Finally, I designed an "array" where the first 10 indices are hardcoded members. The members are set/selected through a switch:

class TenArray implements Array {
    private long v0;
    private long v1;
    private long v2;
    private long v3;
    private long v4;
    private long v5;
    private long v6;
    private long v7;
    private long v8;
    private long v9;
    private long[] extras;

    public TenArray(int size) {
        if (size > 10) {
            extras = new long[size - 10];
        }
    }

    @Override
    public long get(final int i) {
        switch (i) {
        case 0:
            return v0;
        case 1:
            return v1;
        case 2:
            return v2;
        case 3:
            return v3;
        case 4:
            return v4;
        case 5:
            return v5;
        case 6:
            return v6;
        case 7:
            return v7;
        case 8:
            return v8;
        case 9:
            return v9;
        default:
            return extras[i - 10];
        }
    }

    @Override
    public void set(final int i, final long v) {
        switch (i) {
        case 0:
            v0 = v; break;
        case 1:
            v1 = v; break;
        case 2:
            v2 = v; break;
        case 3:
            v3 = v; break;
        case 4:
            v4 = v; break;
        case 5:
            v5 = v; break;
        case 6:
            v6 = v; break;
        case 7:
            v7 = v; break;
        case 8:
            v8 = v; break;
        case 9:
            v9 = v; break;
        default:
            extras[i - 10] = v;
        }
    }
}

I tested it with this harness:

import java.util.Random;

public class ArrayOptimization {
    public static void main(String[] args) {
        int size = 10;
        long[] data = new long[size];
        Random r = new Random();
        for ( int i = 0; i < data.length; i++ ) {
            data[i] = r.nextLong();
        }

        Array[] a = new Array[] {
                new NoOpArray(),
                new NormalArray(size),
                new TenArray(size)
        };

        for (;;) {
            for ( int i = 0; i < a.length; i++ ) {
                testSet(a[i], data, 10000000);
                testGet(a[i], data, 10000000);
            }
        }
    }

    private static void testGet(Array a, long[] data, int iterations) {
        long nanos = System.nanoTime();
        for ( int i = 0; i < iterations; i++ ) {
            for ( int j = 0; j < data.length; j++ ) {
                data[j] = a.get(j);
            }
        }
        long stop = System.nanoTime();
        System.out.printf("%s/get took %fms%n", a.getClass().getName(), 
                (stop - nanos) / 1000000.0);
    }

    private static void testSet(Array a, long[] data, int iterations) {
        long nanos = System.nanoTime();
        for ( int i = 0; i < iterations; i++ ) {
            for ( int j = 0; j < data.length; j++ ) {
                a.set(j, data[j]);
            }
        }
        long stop = System.nanoTime();
        System.out.printf("%s/set took %fms%n", a.getClass().getName(), 
                (stop - nanos) / 1000000.0);

    }
}

The results were somewhat surprising. TenArray performs non-trivially faster than NormalArray (for sizes <= 10). Subtracting the overhead (using the NoOpArray average), TenArray takes ~65% of the time of the normal array. So if you know the likely maximum size of your array, I suppose it is possible to exceed the speed of an array. I would imagine the switch needs either less bounds checking, or more efficient bounds checking, than an array does.

NoOpArray/set took 953.272654ms
NoOpArray/get took 891.514622ms
NormalArray/set took 1235.694953ms
NormalArray/get took 1148.091061ms
TenArray/set took 1149.833109ms
TenArray/get took 1054.040459ms
NoOpArray/set took 948.458667ms
NoOpArray/get took 888.618223ms
NormalArray/set took 1232.554749ms
NormalArray/get took 1120.333771ms
TenArray/set took 1153.505578ms
TenArray/get took 1056.665337ms
NoOpArray/set took 955.812843ms
NoOpArray/get took 893.398847ms
NormalArray/set took 1237.358472ms
NormalArray/get took 1125.100537ms
TenArray/set took 1150.901231ms
TenArray/get took 1057.867936ms
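
To spell out that subtraction with the get numbers above: NormalArray averages roughly 1131 ms against a NoOpArray baseline of roughly 891 ms, so the plain array access itself costs about 240 ms per run; TenArray averages roughly 1056 ms, i.e. about 165 ms net, which works out to roughly 65-70% of the array's net cost.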

Whether you can get speeds faster than an array in practice, I'm not sure; obviously this way you incur whatever overhead is associated with the interface/class/methods.

Mark Peters answered Mar 07 '23


Most likely you are partially misled in your interpretation of the profiler's results. Profilers are notorious for overinflating the performance impact of small, frequently called methods. In your case, the profiling overhead for the get() method is probably larger than the actual processing time spent in the method itself. The situation is worsened further because the instrumentation also interferes with the JIT's ability to inline methods.

As a rule of thumb for this situation: if the total processing time for a piece of work of known length increases more than two- to threefold when running under the profiler, the profiling overhead will give you skewed results.

To verify that your changes actually have an impact, always measure performance improvements without the profiler as well. The profiler can hint at bottlenecks, but it can also deceive you into looking at places where nothing is wrong.
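
For example, a plain wall-clock measurement around a representative batch of lookups, run without the profiler attached, is usually enough (a rough sketch; "table" and "keys" are placeholders for your actual hash table and workload):

// Rough wall-clock check without the profiler; table and keys are placeholders.
long start = System.nanoTime();
long hits = 0;
for (final long key : keys) {            // keys: a representative batch of lookup keys
    if (table.get(key) != null) {
        hits++;
    }
}
long elapsedMs = (System.nanoTime() - start) / 1000000L;
System.out.println(keys.length + " lookups, " + hits + " hits, " + elapsedMs + " ms");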

Array bounds checking can have a surprisingly large impact on performance (if you do comparatively little else), but it can also be hard to separate cleanly from general memory access penalties. In some trivial cases the JIT may be able to eliminate them (there have been efforts toward bounds check elimination in Java 6), but this is AFAIK mostly limited to simple loop constructs like for(x=0; x<array.length; x++). Under some circumstances you may be able to replace array access with plain member access, completely avoiding the bounds checks, but that is limited to the rare cases where you access your array exclusively by constant indices. I see no way to apply it to your problem.
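
A rough illustration of the two cases (a sketch, not from the question's code):

// The JIT can prove 0 <= i < data.length for every access here and drop the per-access checks.
static long sumAll(final long[] data) {
    long sum = 0;
    for (int i = 0; i < data.length; i++) {
        sum += data[i];
    }
    return sum;
}

// Here the index comes from a computation, so the bounds check generally stays.
static long probe(final long[] data, final int computedIndex) {
    return data[computedIndex];
}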

The change suggested by Mark Peters is most likely faster not solely because it eliminates a bounds check, but also because it alters the locality properties of your data structures in a more cache-friendly way.

Durandal answered Mar 07 '23