Any suggestion on how to improve the performance of a Java String to byte[] conversion?

I've inherited a piece of code that makes intensive use of String -> byte[] conversions and vice versa for some homegrown serialisation code. Essentially the Java objects know how to convert their constituent parts into Strings, which then get converted into a byte[]. That byte array is passed through JNI into C++ code, which reconstitutes the byte[] into C++ std::strings and uses those to bootstrap C++ objects mirroring the Java objects. There is a little more to it, but this is a high-level view of how the code works. The communication works the same way in both directions, so the C++ -> Java transition is a mirror image of the Java -> C++ transition described above.

One part of this code - the actual conversion of a String into a byte[] - is showing up in the profiler as burning a lot of CPU. Granted, there is a lot of data being transferred, but this is an unexpected bottleneck.

The basic outline of the code is as follows:

public void convertToByteArray(String convert_me, ByteArrayOutputStream stream)
  throws IOException  // write(byte[]) is inherited from OutputStream and declares IOException
{
  stream.write(convert_me.getBytes());
}

There is a little more to the function but not much. The above function gets called once for every String/Stringified object and after all of the constituents are written to the ByteArrayOutputStream, the ByteArrayOutputStream gets converted into a byte[]. Breaking the above down into a more profiler-friendly version by extracting the convert_me.getBytes() call shows that over 90% of the time in this function is spent in the getBytes() call.
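A sketch of what that split might look like (the exact refactoring isn't shown in the question) is below; the only change is that the getBytes() call is extracted into its own method so the profiler reports it separately:

public void convertToByteArray(String convert_me, ByteArrayOutputStream stream)
  throws IOException
{
  stream.write(toBytes(convert_me));
}

// Extracted purely so the profiler shows it as a separate method;
// over 90% of convertToByteArray's time is reported here.
private byte[] toBytes(String convert_me)
{
  return convert_me.getBytes();
}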

Is there a way to improve upon the performance of the getBytes() call or is there another, potentially faster way to achieve the same conversion?

The number of objects being converted is quite large. On the profiling runs, which use only a small subset of the production data, I'm seeing something like 10 million-plus calls to the above conversion function.

Because we're very close to releasing the project into production, a few workarounds aren't possible at this point:

  • Rewrite the serialisation interface to just pass String objects across the JNI layer. This is the obvious (to me) way of improving the situation, but it would require major reengineering of the serialisation layer, and given that we're going into UAT early this week it's far too late to make that sort of complex change. It's my top todo for the next release, so it will be done; until then, though, I need a workaround. Aside from the performance, the code works, has been in use for years and has most of the kinks worked out.
  • Changing the JVM (currently 1.5) is also not an option. Unfortunately this is the default JVM installed on the client's machines, and updating to 1.6 (which might or might not be faster in this case) is not possible. Anybody who has worked in a large organisation probably understands why...
  • In addition, we're already running into memory constraints, so attempting to cache at least the larger Strings and their byte array representations, while potentially elegant, is likely to cause more problems than it solves.
asked by Timo Geusch

4 Answers

I'm guessing part of the problem may be that a Java String is stored in UTF-16 format - i.e. two bytes per character - so getBytes() is doing a fair amount of work to convert each UTF-16 element into one or two bytes, depending on your current character set.

Have you tried using CharsetEncoder? This should give you more control over the String encoding and allow you to skip some of the overhead in the default getBytes() implementation.
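For illustration, a minimal sketch of that approach on a 1.5 JVM, using java.nio.charset directly; the ISO-8859-1 charset and the class name are assumptions, not anything from the question:

import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;

public class StringEncoder {
    // CharsetEncoder is not thread-safe, so keep one per thread/instance.
    private final CharsetEncoder encoder =
        Charset.forName("ISO-8859-1").newEncoder();

    public byte[] encode(String s) throws CharacterCodingException {
        // The convenience encode(CharBuffer) resets the encoder before encoding.
        ByteBuffer bb = encoder.encode(CharBuffer.wrap(s));
        byte[] out = new byte[bb.remaining()];
        bb.get(out);
        return out;
    }
}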

Alternatively, have you tried explicitly specifying the charset to getBytes(), using US-ASCII as the character set?
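That change fits into the method from the question roughly like this; US-ASCII is only an assumption about the data, and the charset-name overload of getBytes() is available on a 1.5 JVM:

public void convertToByteArray(String convert_me, ByteArrayOutputStream stream)
  throws UnsupportedEncodingException
{
  // Naming the charset skips the default-charset lookup on every call.
  byte[] bytes = convert_me.getBytes("US-ASCII");
  // This overload of write() does not declare IOException.
  stream.write(bytes, 0, bytes.length);
}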

answered by DaveR


I see several options:

  • If you have Latin-1 strings, you could just strip the high byte of each char in the string (Charset does this too, I think); see the sketch after this list.
  • You could also split the work among multiple cores if you have more than one (the fork-join framework was backported to 1.5 at one point).
  • You could also build the data into a StringBuilder and only convert it to a byte array once, at the end.
  • Look at your GC/memory usage. Too much memory utilization might slow your algorithms down due to frequent GC interruptions.
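As a concrete illustration of the first point, a rough sketch of a Latin-1 fast path that also reuses its scratch buffer; this assumes the strings really are ISO-8859-1 and that each thread gets its own encoder instance:

import java.io.ByteArrayOutputStream;

public class Latin1Encoder {
    // Scratch buffer reused across calls to avoid a new byte[] per String.
    private byte[] scratch = new byte[256];

    public void convertToByteArray(String convert_me, ByteArrayOutputStream stream) {
        int len = convert_me.length();
        if (scratch.length < len) {
            scratch = new byte[len];
        }
        for (int i = 0; i < len; i++) {
            // For ISO-8859-1 text each char fits in one byte, so the high
            // byte of every UTF-16 code unit can simply be dropped.
            scratch[i] = (byte) convert_me.charAt(i);
        }
        stream.write(scratch, 0, len);
    }
}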
answered by akarnokd


If it is the same strings you convert all the time, you could cache the result in a WeakHashMap.

Also, have a look at the getBytes() method (the source is available if you install the SDK) to see exactly what it does.
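A minimal sketch of that cache; the class and method names are illustrative, and single-threaded access is assumed since WeakHashMap is not synchronized:

import java.util.Map;
import java.util.WeakHashMap;

public class CachingEncoder {
    // Keys are held weakly, so an entry can be collected once the String
    // is no longer referenced anywhere else.
    private final Map<String, byte[]> cache = new WeakHashMap<String, byte[]>();

    public byte[] toBytes(String s) {
        byte[] bytes = cache.get(s);
        if (bytes == null) {
            bytes = s.getBytes();
            cache.put(s, bytes);
        }
        return bytes;
    }
}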

answered by Thorbjørn Ravn Andersen


The problem is that the standard Java methods, even today, allocate a fresh buffer every time they produce UTF-8. To make the encoding performant you'd need to write custom code and reuse the byte[] buffer. Colfer can generate that code, or you can simply copy its implementation:

    https://github.com/pascaldekloe/colfer/blob/4c6f022c5183c0aebb8bc73e8137f976d31b1083/java/gen/O.java#L414
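For illustration, a stripped-down sketch of the same idea: a hand-rolled UTF-8 encoder writing into a reused byte[]. It ignores surrogate pairs (code points above U+FFFF) for brevity, so it is not a drop-in replacement for the linked generator, and the class and method names are made up:

public final class Utf8Scratch {
    // Scratch buffer reused across calls; one instance per thread is assumed.
    private byte[] buf = new byte[1024];

    // Encodes s as UTF-8 into the reused buffer and returns the byte count.
    // Surrogate pairs are not handled here; the linked Colfer code covers those.
    public int encode(String s) {
        int max = s.length() * 3;   // worst case for characters in the BMP
        if (buf.length < max) {
            buf = new byte[max];
        }
        int n = 0;
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c < 0x80) {
                buf[n++] = (byte) c;
            } else if (c < 0x800) {
                buf[n++] = (byte) (0xC0 | (c >> 6));
                buf[n++] = (byte) (0x80 | (c & 0x3F));
            } else {
                buf[n++] = (byte) (0xE0 | (c >> 12));
                buf[n++] = (byte) (0x80 | ((c >> 6) & 0x3F));
                buf[n++] = (byte) (0x80 | (c & 0x3F));
            }
        }
        return n;
    }

    // The caller reads the first encode(...) bytes out of this array.
    public byte[] buffer() {
        return buf;
    }
}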

answered by Pascal de Kloe