
How to correctly compute the length of a String in Java?


I know there is String#length and the various methods in Character which more or less work on code units/code points.

What is the suggested way in Java to actually return the result as specified by Unicode standards (UAX#29), taking things like language/locale, normalization and grapheme clusters into account?

asked Jul 26 '11 by soc




1 Answer

The normal model of Java string length

String.length() is specified as returning the number of char values ("code units") in the String. That is the most generally useful definition of the length of a Java String; see below.

Your description¹ of the semantics of length based on the size of the backing array/array slice is incorrect. The fact that the value returned by length() is also the size of the backing array or array slice is merely an implementation detail of typical Java class libraries. String does not need to be implemented that way; indeed, I think I've seen Java String implementations where it WASN'T implemented that way.


Alternative models of string length

To get the number of Unicode code points in a String, use str.codePointCount(0, str.length()) -- see the javadoc.
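For example (the sample string here is my own illustration; it contains a supplementary character that needs a surrogate pair):

    String s = "a\uD83D\uDE00b";   // "a" + U+1F600 (emoji) + "b" -- the emoji is one code point but two chars

    System.out.println(s.length());                      // 4 code units
    System.out.println(s.codePointCount(0, s.length())); // 3 code points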

To get the size (in bytes) of a String in a specific encoding (i.e. charset), use str.getBytes(charset).length².
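A quick sketch of how the byte count depends on the charset, using the standard StandardCharsets constants (the sample string is mine):

    import java.nio.charset.StandardCharsets;

    String s = "héllo";   // 5 code units, but é needs 2 bytes in UTF-8

    System.out.println(s.length());                                   // 5
    System.out.println(s.getBytes(StandardCharsets.UTF_8).length);    // 6
    System.out.println(s.getBytes(StandardCharsets.UTF_16BE).length); // 10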

To deal with locale-specific issues, you can use Normalizer to normalize the String to whatever form is most appropriate to your use-case, and then use codePointCount as above. But in some cases, even this won't work; e.g. the Hungarian letter counting rules which the Unicode standard apparently doesn't cater for.
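As a sketch of why normalization matters for counting, the same visible text can carry different numbers of code points depending on its form:

    import java.text.Normalizer;

    String precomposed = "\u00E9";    // é as one code point (NFC)
    String decomposed  = "e\u0301";   // e + U+0301 COMBINING ACUTE ACCENT (NFD)

    System.out.println(precomposed.codePointCount(0, precomposed.length())); // 1
    System.out.println(decomposed.codePointCount(0, decomposed.length()));   // 2

    // Normalizing to a single form first gives a consistent count.
    String n = Normalizer.normalize(decomposed, Normalizer.Form.NFC);
    System.out.println(n.codePointCount(0, n.length()));                     // 1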


Using String.length() is generally OK

The reason that most applications use String.length() is that most applications are not concerned with counting the number of characters in words, texts, etcetera in a human-centric way. For instance, if I do this:

String s = "hi mum how are you"; int pos = s.indexOf("mum"); String textAfterMum = s.substring(pos + "mum".length()); 

it really doesn't matter that "mum".length() is not returning code points or that it is not a linguistically correct character count. It is measuring the length of the string using the model that is appropriate to the task at hand. And it works.

Obviously, things get a bit more complicated when you do multilingual text analysis; e.g. searching for words. But even then, if you normalize your text and parameters before you start, you can safely code in terms of "code units" rather than "code points" most of the time; i.e. length() still works.
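For instance, here is a small sketch (with made-up sample strings) of normalizing both sides before searching, so that code-unit operations like indexOf stay consistent:

    import java.text.Normalizer;

    String rawText  = "re\u0301sume\u0301 writing";  // decomposed accents
    String rawQuery = "r\u00E9sum\u00E9";            // precomposed accents

    System.out.println(rawText.indexOf(rawQuery));   // -1: same visible text, different code units

    String text  = Normalizer.normalize(rawText,  Normalizer.Form.NFC);
    String query = Normalizer.normalize(rawQuery, Normalizer.Form.NFC);
    System.out.println(text.indexOf(query));         // 0: matches after normalization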


¹ - This description appeared in some versions of the question. See the edit history ... if you have sufficient rep points.
² - Using str.getBytes(charset).length entails doing the encoding and then throwing the encoded bytes away. There is possibly a general way to do this without that copy. It would entail wrapping the String in a CharBuffer, creating a custom ByteBuffer with no backing to act as a byte counter, and then using CharsetEncoder.encode(...) to count the bytes. Note: I have not tried this, and I would not recommend trying unless you have clear evidence that getBytes(charset) is a significant performance bottleneck.
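For what it's worth, here is an untested sketch along those lines. Since ByteBuffer can't be subclassed outside java.nio, this version uses a small reusable scratch buffer rather than a truly unbacked one, and the helper name encodedLength is mine, not a JDK API:

    import java.nio.ByteBuffer;
    import java.nio.CharBuffer;
    import java.nio.charset.CharacterCodingException;
    import java.nio.charset.Charset;
    import java.nio.charset.CharsetEncoder;
    import java.nio.charset.CoderResult;

    static long encodedLength(String s, Charset charset) throws CharacterCodingException {
        CharsetEncoder encoder = charset.newEncoder();
        CharBuffer in = CharBuffer.wrap(s);
        ByteBuffer scratch = ByteBuffer.allocate(1024);  // reused; its contents are discarded
        long total = 0;

        // Encode in chunks: OVERFLOW means the scratch buffer filled up, so
        // count its bytes, clear it, and continue with the remaining input.
        while (true) {
            CoderResult cr = encoder.encode(in, scratch, true);
            total += scratch.position();
            scratch.clear();
            if (cr.isUnderflow()) break;                 // all input consumed
            if (!cr.isOverflow()) cr.throwException();   // malformed/unmappable input
        }
        // Flush any bytes the encoder buffered internally (charset-dependent).
        while (true) {
            CoderResult cr = encoder.flush(scratch);
            total += scratch.position();
            scratch.clear();
            if (cr.isUnderflow()) break;
            if (!cr.isOverflow()) cr.throwException();
        }
        return total;
    }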

answered Oct 19 '22 by Stephen C