I've inherited a piece of code that makes intensive use of String -> byte[] conversions (and vice versa) for some homegrown serialisation code. Essentially, the Java objects know how to convert their constituent parts into Strings, which then get converted into a byte[]. That byte array is passed through JNI into C++ code that reconstitutes the byte[] into C++ std::strings and uses those to bootstrap C++ objects which mirror the Java objects. There is a little more to it, but this is a high-level view of how this piece of code works. The communication works like this in both directions, so the C++ -> Java transition is a mirror image of the Java -> C++ transition described above.
One part of this code - the actual conversion of a String into a byte[] - is showing up in the profiler as burning a lot of CPU. Granted, there is a lot of data being transferred, but this is an unexpected bottleneck.
The basic outline of the code is as follows:
public void convertToByteArray(String convert_me, ByteArrayOutputStream stream) throws IOException
{
    stream.write(convert_me.getBytes());
}
There is a little more to the function but not much. The above function gets called once for every String/Stringified object, and after all of the constituents are written to the ByteArrayOutputStream, the ByteArrayOutputStream gets converted into a byte[]. Breaking the above down into a more profiler-friendly version by extracting the convert_me.getBytes() call shows that over 90% of the time in this function is spent in the getBytes() call.
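For reference, the profiler-friendly split looks roughly like this (a sketch of what is described above, not the exact production code):

public void convertToByteArray(String convert_me, ByteArrayOutputStream stream) throws IOException
{
    // Pulling the encoding out into a local lets the profiler attribute
    // the cost of getBytes() separately from the stream write.
    byte[] encoded = convert_me.getBytes();
    stream.write(encoded);
}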
Is there a way to improve upon the performance of the getBytes() call or is there another, potentially faster way to achieve the same conversion?
The number of objects that are being converted is quite large. On the profiling runs which are using only a small subset of the production data, I'm seeing something like 10 million plus calls to the above conversion function.
Because we're very close to releasing the project into production, there are a few workarounds that aren't possible at this point in time:
I'm guessing part of the problem may be that a Java String is stored in UTF-16 format - i.e. two bytes per character - so getBytes() is doing a bunch of work to convert each UTF-16 element into one or two bytes, depending on your current character set.
Have you tried using CharsetEncoder? This should give you more control over the String encoding and allow you to skip some of the overhead in the default getBytes() implementation.
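A minimal sketch of that approach, assuming the data fits in a single-byte charset such as ISO-8859-1 and that the caller supplies (and reuses) a sufficiently large ByteBuffer - the class and method names here are made up for illustration:

import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CoderResult;
import java.nio.charset.StandardCharsets;

public class ReusableEncoder
{
    // CharsetEncoder is not thread-safe, so keep one per thread if the
    // conversion runs concurrently.
    private final CharsetEncoder encoder = StandardCharsets.ISO_8859_1.newEncoder();

    // Encodes convert_me into the caller-supplied buffer, which is reused
    // across calls; returns the buffer position after encoding.
    public int encode(String convert_me, ByteBuffer target) throws CharacterCodingException
    {
        encoder.reset();
        CoderResult result = encoder.encode(CharBuffer.wrap(convert_me), target, true);
        if (result.isError())
        {
            result.throwException(); // malformed or unmappable input
        }
        encoder.flush(target);
        return target.position();
    }
}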
Alternatively, have you tried explicitly passing the charset to getBytes(), using US-ASCII as the character set?
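If the data really is ASCII-only (an assumption - the question doesn't say), that suggestion looks like this; the class name is just for illustration:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class Converter
{
    public void convertToByteArray(String convert_me, ByteArrayOutputStream stream) throws IOException
    {
        // Explicit charset: the result no longer depends on the platform
        // default encoding, and the intent is clear at the call site.
        stream.write(convert_me.getBytes(StandardCharsets.US_ASCII));
    }
}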
I see several options:
If you are converting the same strings over and over, you could cache the results in a WeakHashMap (see the sketch after these options).
Also, have a look at the getBytes() method (the source is available if you install the JDK) to see what exactly it does.
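A sketch of the caching idea from the first option - assuming the same values really do recur and that callers treat the returned array as read-only (class and method names are made up):

import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

public class CachingConverter
{
    // WeakHashMap lets entries be reclaimed once the String key is no
    // longer referenced elsewhere; wrapped for thread safety.
    private final Map<String, byte[]> cache =
            Collections.synchronizedMap(new WeakHashMap<String, byte[]>());

    public byte[] toBytes(String convert_me)
    {
        byte[] cached = cache.get(convert_me);
        if (cached == null)
        {
            // Same conversion as the original code; an explicit charset
            // could be passed here as suggested in the other answer.
            cached = convert_me.getBytes();
            cache.put(convert_me, cached);
        }
        return cached;
    }
}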
The problem is that the standard Java methods, even today, allocate a new byte[] for every UTF-8 encoding. To make the encoding performant you'd need to write custom code and reuse the byte[] buffer. Colfer can generate that code for you, or you can simply copy its implementation.
https://github.com/pascaldekloe/colfer/blob/4c6f022c5183c0aebb8bc73e8137f976d31b1083/java/gen/O.java#L414
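For illustration only, here is a minimal sketch of that idea - hand-rolled UTF-8 encoding into a caller-supplied, reusable byte[] - rather than Colfer's actual generated code; it assumes the buffer is large enough and does no bounds checking:

public final class Utf8Writer
{
    // Encodes s as UTF-8 into buf starting at offset and returns the new
    // offset. The caller owns buf and reuses it across calls, so no
    // per-string byte[] is allocated. Unpaired surrogates are not replaced.
    public static int write(String s, byte[] buf, int offset)
    {
        int i = offset;
        for (int c = 0; c < s.length(); c++)
        {
            char ch = s.charAt(c);
            if (ch < 0x80)
            {
                buf[i++] = (byte) ch;
            }
            else if (ch < 0x800)
            {
                buf[i++] = (byte) (0xC0 | (ch >> 6));
                buf[i++] = (byte) (0x80 | (ch & 0x3F));
            }
            else if (Character.isHighSurrogate(ch) && c + 1 < s.length()
                    && Character.isLowSurrogate(s.charAt(c + 1)))
            {
                int cp = Character.toCodePoint(ch, s.charAt(++c));
                buf[i++] = (byte) (0xF0 | (cp >> 18));
                buf[i++] = (byte) (0x80 | ((cp >> 12) & 0x3F));
                buf[i++] = (byte) (0x80 | ((cp >> 6) & 0x3F));
                buf[i++] = (byte) (0x80 | (cp & 0x3F));
            }
            else
            {
                buf[i++] = (byte) (0xE0 | (ch >> 12));
                buf[i++] = (byte) (0x80 | ((ch >> 6) & 0x3F));
                buf[i++] = (byte) (0x80 | (ch & 0x3F));
            }
        }
        return i;
    }
}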