For one of my applications, the following function has to be called very often. It takes up a lot of CPU, so I am wondering how to improve its performance.
The code counts the occurrences of each combination of four characters (tetragrams). During testing, I found that the number of entries in the map is around 100, and the length of the text is in the range of 100 to 800 characters. The initial capacity of 200 is a guess; the code seems to run faster with it than without, but it is probably not the optimal value.
private Map<String, Integer> getTetagramCount(final String text) {
    final Map<String, Integer> cipherTetagrams = new HashMap<String, Integer>(200);
    // Note: the bound must be <= (not <), or the last tetragram is skipped.
    for (int i = 0; i <= text.length() - 4; i++) {
        final String tet = text.substring(i, i + 4);
        final Integer count = cipherTetagrams.get(tet);
        if (count != null) {
            cipherTetagrams.put(tet, count + 1);
        } else {
            cipherTetagrams.put(tet, 1);
        }
    }
    return cipherTetagrams;
}
Approach: declare a HashMap<String, Integer> in Java, slide a four-character window over the text, and for each substring check whether it is already in the map.
Starting with Java 8, HashMap has a built-in optimization: when a bucket grows too large, its entries are stored in a balanced tree instead of a linked list. That improves the worst-case lookup time from O(n) to O(log n), which is much better. For the tree ordering to work, the keys need to implement the Comparable interface, which String does.
In Java, Integer is immutable: the value cannot be modified after construction. Therefore, to increment a counter, we have to get the Integer from the map, create a new Integer by adding one, and put it back into the map.
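Incidentally, since Java 8 that get/check/put dance can be collapsed into a single Map.merge call. A minimal sketch (it still boxes the counts, so it tidies the code more than it speeds it up):

private Map<String, Integer> getTetagramCount(final String text) {
    final Map<String, Integer> counts = new HashMap<>(200);
    for (int i = 0; i <= text.length() - 4; i++) {
        // merge() inserts 1 if the key is absent, otherwise applies Integer::sum.
        counts.merge(text.substring(i, i + 4), 1, Integer::sum);
    }
    return counts;
}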
I do a lot of work in NLP and machine learning, so I have to do this sort of thing all the time, and there are a ton of opportunities for optimization.
A few points to consider:
First of all, you're getting killed by the standard JDK HashMap class. It's a nice container for general-purpose computing, but it's terrible for high-performance computing. For each entry in your collection (a four-character string, 8 bytes of character data, plus an integer, 4 bytes), the standard Java HashMap allocates a whole cluster of objects: a String wrapper for the key, the character array behind it, a boxed Integer for the value, and an internal entry object tying them all together.
So your tiny little 12 bytes of data becomes 64 bytes. And that's before the HashMap has allocated an array of hash values to use during its lookup operations. Keep in mind that all those tiny little objects mean more work for the GC, but more importantly, they mean that your objects span a larger swath of main memory and are less likely to fit within the CPU cache. When you have lots of cache misses, you lose performance.
NOTE: A commenter reminded me that all the substrings will share the same underlying character array, which is a good point that I'd forgotten about. (This sharing applies to older JVMs; since Java 7u6, substring() copies the characters instead.) But still, that means each map entry goes from being 64 bytes down to 44 bytes. Which is still a shame, when there should only be 12 bytes.
Boxing and unboxing all those integer values causes your code to run more slowly and to consume more memory. In most cases, we don't really care about that, and the vanilla HashMap implementation is fine, even with its mandatory boxing and greedy memory consumption. But in your case, if this code is run within a tight inner loop, we'd rather have a specialized class that knows its values will always be integers and eliminates the need for boxing.
If you dig down into the JDK source code, you'll see that your code ends up invoking the string's hashCode() and equals() methods twice: once for the map.get() and once for the map.put(). But there's a different kind of collection, called a HashBag, that can perform lookup, insertion, and count incrementation with only one lookup. A "bag" collection is kind of like a "set", except that it can contain duplicates, and it keeps track of how many duplicates there are. For each of your tetragrams, you'd just call bag.put(tetragram) without having to first retrieve and update a count. Unfortunately, there are no bag implementations in the JDK, so you'll need to find one elsewhere, or write one yourself.
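For example, Guava's Multiset is one such bag type. A sketch, assuming Guava is available (its add() increments the stored count with a single internal lookup):

import com.google.common.collect.HashMultiset;
import com.google.common.collect.Multiset;

private Multiset<String> countTetragramsWithMultiset(final String text) {
    final Multiset<String> bag = HashMultiset.create();
    for (int i = 0; i <= text.length() - 4; i++) {
        bag.add(text.substring(i, i + 4));  // one lookup per tetragram
    }
    return bag;  // bag.count(tet) retrieves a tetragram's count
}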
Luckily, your tetragrams can be losslessly encoded as long values (since each Java character is 2 bytes wide, and a long gives you eight bytes to work with). You can therefore iterate through the character array and convert each tetragram to a long, avoiding all the overhead of constructing so many tiny strings. Then you can keep your results in a LongIntHashMap (from the Trove library). This will be much faster than your current implementation, because you can avoid creating all those tiny little string objects.
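Here's a minimal sketch of that idea, assuming GNU Trove 3 is on the classpath, where the class is named TLongIntHashMap and adjustOrPutValue() bumps a counter in a single probe:

import gnu.trove.map.hash.TLongIntHashMap;

private TLongIntHashMap countTetragramsWithTrove(final String text) {
    final TLongIntHashMap counts = new TLongIntHashMap(500);
    final char[] c = text.toCharArray();
    for (int i = 0; i + 4 <= c.length; i++) {
        // Pack four 16-bit chars into one 64-bit long.
        final long tetragram = (((long) c[i])     << 48) |
                               (((long) c[i + 1]) << 32) |
                               (((long) c[i + 2]) << 16) |
                                ((long) c[i + 3]);
        // Adds 1 if the key exists, otherwise inserts it with value 1.
        counts.adjustOrPutValue(tetragram, 1, 1);
    }
    return counts;
}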
Although the Trove LongIntHashMap is quite excellent, it isn't quite as good as a LongHashBag would be. There's no equals invocation (since longs can be compared with the == operator), but you'll still pay the price of two hash lookups: one for the get and one for the put. If you want to get really aggressive with optimization, you could look at the source code of the LongIntHashMap and figure out how to modify it into a LongHashBag. It's not that difficult, and ultimately, that's exactly what I've done in my own code.
Okay, here's a little bit of code:
private LongHashBag countTetragrams(String text) {
    // Homework assignment: find a good LongHashBag implementation, or
    // grab the LongIntHashMap implementation from Trove, and tweak it
    // to work as a Bag.
    LongHashBag bag = new LongHashBag(500);

    // There are no tetragrams in this string.
    if (text.length() < 4) return bag;

    // Shortcut: if we calculate the first tetragram before entering
    // the loop, then we can use bit-shifting logic within the loop
    // to create all subsequent tetragram values.
    char[] c = text.toCharArray();
    long tetragram = (((long) c[0]) << 48) |
                     (((long) c[1]) << 32) |
                     (((long) c[2]) << 16) |
                      ((long) c[3]);
    bag.add(tetragram);

    for (int i = 4, last = text.length(); i < last; i++) {
        // During each loop iteration, the leftmost 2 bytes are shifted
        // out of the tetragram, to make room for the 2 bytes from the
        // current character.
        tetragram = (tetragram << 16) | ((long) c[i]);
        bag.add(tetragram);
    }
    return bag;
}
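If you later need the counts keyed by the original characters, the packed long can be decoded by reversing the shifts. A small helper sketch (decodeTetragram is a hypothetical name, not part of any bag API):

// Hypothetical helper: unpack a tetragram long back into a 4-character string.
private static String decodeTetragram(long tetragram) {
    return new String(new char[] {
            (char) (tetragram >>> 48),
            (char) (tetragram >>> 32),   // the cast to char keeps the low 16 bits
            (char) (tetragram >>> 16),
            (char) tetragram
    });
}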
I just did some testing of various solutions, and I was able to get about a 25% performance improvement using the LongHashBag approach instead of the standard HashMap approach.
However, I was able to get a 300% improvement by recycling the resultant objects. Basically, instead of doing this:
private LongHashBag countTetragrams(String text) {
    // Creates a new HashBag on every invocation. Very wasteful.
    LongHashBag bag = new LongHashBag(500);
    // ...blah blah blah...
    return bag;
}
...I'm now doing this...
private void countTetragrams(String text, LongHashBag bag) {
    // Return the bag to a neutral state, and recycle it.
    bag.clear();
    // ...blah blah blah...
}
The calling code is responsible for creating the LongHashBag object and ensuring that we're done with it by the time we call the count method again.
But it would also work to do this...
private LongHashBag countTetragrams(String text) {
    // Retrieve a recycled bag from a pool, instead of allocating a new one.
    LongHashBag bag = retrieveLongHashBagFromObjectPool();
    // ...blah blah blah...
    return bag;
}
... which will add a little bit of overhead for maintaining the pool. And the calling code would have to remember to put the bag back into the pool when it finishes using it. But the performance benefits could definitely be worth it.
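A minimal single-threaded pool sketch (the pool field and returnLongHashBagToObjectPool are illustrative names, and LongHashBag is the hypothetical bag type from above; a concurrent deque would be needed if threads share the pool):

import java.util.ArrayDeque;
import java.util.Deque;

private final Deque<LongHashBag> pool = new ArrayDeque<LongHashBag>();

private LongHashBag retrieveLongHashBagFromObjectPool() {
    LongHashBag bag = pool.pollFirst();
    // Fall back to allocating when the pool is empty.
    return (bag != null) ? bag : new LongHashBag(500);
}

private void returnLongHashBagToObjectPool(LongHashBag bag) {
    bag.clear();          // reset to a neutral state before reuse
    pool.offerFirst(bag);
}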
Incidentally, these are exactly the kinds of tricks I use every day. Object pooling has become one of my most reliable techniques for improving performance.
But like I said, recycling those objects yields a 300% performance improvement.