The metacharacter \b matches a word boundary when it appears outside a bracketed character class, and matches the backspace character (0x08) when it appears inside one.
\w (word character) matches any single letter, digit, or underscore (the same as [a-zA-Z0-9_]). Its uppercase counterpart \W (non-word character) matches any single character that \w does not match (the same as [^a-zA-Z0-9_]). In regex, an uppercase metacharacter is always the inverse of its lowercase counterpart.
To match a specific Unicode code point, use \uFFFF, where FFFF is the hexadecimal number of the code point you want to match. You must always specify four hexadecimal digits. For example, \u00E0 matches à, but only when it is encoded as the single code point U+00E0.
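For instance, in Java (whose regex syntax uses the same \uFFFF notation), the difference shows up like this:

System.out.println("\u00E0".matches("\\u00E0"));   // true: the string is the single code point U+00E0
System.out.println("a\u0300".matches("\\u00E0"));  // false: decomposed à ('a' + COMBINING GRAVE ACCENT) is two code points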
The source code for the rewriting functions I discuss below is available here.
Sun’s updated Pattern class for JDK7 has a marvelous new flag, UNICODE_CHARACTER_CLASS, which makes everything work right again. It’s available as an embeddable (?U) inside the pattern, so you can use it with the String class’s wrappers, too. It also sports corrected definitions for various other properties. It now tracks The Unicode Standard, in both RL1.2 and RL1.2a from UTS#18: Unicode Regular Expressions. This is an exciting and dramatic improvement, and the development team is to be commended for this important effort.
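For instance, a quick sketch of the flag and its embedded form, using an accented word that the default \w cannot handle:

String word = "élève";
System.out.println(word.matches("\\w+"));                // false: the default, ASCII-only \w
System.out.println(java.util.regex.Pattern
        .compile("\\w+", java.util.regex.Pattern.UNICODE_CHARACTER_CLASS)
        .matcher(word).matches());                       // true with the new flag
System.out.println(word.matches("(?U)\\w+"));            // true with the embedded (?U) form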
The problem with Java regexes is that the Perl 1.0 charclass escapes (meaning \w, \b, \s, \d and their complements) are not in Java extended to work with Unicode. Alone amongst these, \b enjoys certain extended semantics, but these map neither to \w, nor to Unicode identifiers, nor to Unicode line-break properties.
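To make the gap concrete, here is what the stock definitions do with a few ordinary code points (no flags set):

System.out.println("\u00A0".matches("\\s"));  // false: NO-BREAK SPACE is not whitespace to Java's \s
System.out.println("\u0661".matches("\\d"));  // false: ARABIC-INDIC DIGIT ONE is not a digit to Java's \d
System.out.println("é".matches("\\w"));       // false: é is not a word character to Java's \w
System.out.println(java.util.regex.Pattern.compile("\\b")
        .matcher("é").find());                // true: yet \b does treat é as a word character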
Additionally, the POSIX properties in Java are accessed this way:
POSIX syntax     Java syntax
[[:Lower:]]      \p{Lower}
[[:Upper:]]      \p{Upper}
[[:ASCII:]]      \p{ASCII}
[[:Alpha:]]      \p{Alpha}
[[:Digit:]]      \p{Digit}
[[:Alnum:]]      \p{Alnum}
[[:Punct:]]      \p{Punct}
[[:Graph:]]      \p{Graph}
[[:Print:]]      \p{Print}
[[:Blank:]]      \p{Blank}
[[:Cntrl:]]      \p{Cntrl}
[[:XDigit:]]     \p{XDigit}
[[:Space:]]      \p{Space}
This is a real mess, because it means that things like Alpha, Lower, and Space do not in Java map to the Unicode Alphabetic, Lowercase, or Whitespace properties. This is exceedingly annoying. Java’s Unicode property support is strictly antemillennial, by which I mean it supports no Unicode property that has come out in the last decade.
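For example, \p{Alpha} and \p{Lower} reject a plain accented letter that the Unicode general-category escapes happily accept:

System.out.println("é".matches("\\p{Alpha}"));  // false: POSIX Alpha here is just [a-zA-Z]
System.out.println("é".matches("\\p{Lower}"));  // false: POSIX Lower here is just [a-z]
System.out.println("é".matches("\\p{L}"));      // true: the Unicode Letter category does match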
Not being able to talk about whitespace properly is super-annoying. Consider the following table. For each of those code points, there is both a J-results column for Java and a P-results column for Perl or any other PCRE-based regex engine:
Regex                  000A     0085     00A0     2029
                       J   P    J   P    J   P    J   P
\s                     1   1    0   1    0   1    0   1
\pZ                    0   0    0   0    1   1    1   1
\p{Zs}                 0   0    0   0    1   1    0   0
\p{Space}              1   1    0   1    0   1    0   1
\p{Blank}              0   0    0   0    0   1    0   0
\p{Whitespace}         -   1    -   1    -   1    -   1
\p{javaWhitespace}     1   -    0   -    0   -    1   -
\p{javaSpaceChar}      0   -    0   -    1   -    1   -
See that?
Virtually every one of those Java white space results is wrong according to Unicode. It’s a really big problem. Java is just messed up, giving answers that are “wrong” according to existing practice and also according to Unicode. Plus Java doesn’t even give you access to the real Unicode properties! In fact, Java does not support any property that corresponds to Unicode whitespace.
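You can reproduce the Java side of the U+00A0 (NO-BREAK SPACE) column in a few lines:

String nbsp = "\u00A0";
System.out.println(nbsp.matches("\\s"));                  // false
System.out.println(nbsp.matches("\\pZ"));                 // true
System.out.println(nbsp.matches("\\p{Zs}"));              // true
System.out.println(nbsp.matches("\\p{Space}"));           // false
System.out.println(nbsp.matches("\\p{Blank}"));           // false
System.out.println(nbsp.matches("\\p{javaWhitespace}"));  // false
System.out.println(nbsp.matches("\\p{javaSpaceChar}"));   // true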
To deal with this and many other related problems, yesterday I wrote a Java function to rewrite a pattern string; it rewrites these 14 charclass escapes:
\w \W \s \S \v \V \h \H \d \D \b \B \X \R
by replacing them with things that actually work to match Unicode in a predictable and consistent fashion. It’s only an alpha prototype from a single hack session, but it is completely functional.
The short story is that my code rewrites those 14 as follows:
\s => [\u0009-\u000D\u0020\u0085\u00A0\u1680\u180E\u2000-\u200A\u2028\u2029\u202F\u205F\u3000]
\S => [^\u0009-\u000D\u0020\u0085\u00A0\u1680\u180E\u2000-\u200A\u2028\u2029\u202F\u205F\u3000]
\v => [\u000A-\u000D\u0085\u2028\u2029]
\V => [^\u000A-\u000D\u0085\u2028\u2029]
\h => [\u0009\u0020\u00A0\u1680\u180E\u2000-\u200A\u202F\u205F\u3000]
\H => [^\u0009\u0020\u00A0\u1680\u180E\u2000\u2001-\u200A\u202F\u205F\u3000]
\w => [\pL\pM\p{Nd}\p{Nl}\p{Pc}[\p{InEnclosedAlphanumerics}&&\p{So}]]
\W => [^\pL\pM\p{Nd}\p{Nl}\p{Pc}[\p{InEnclosedAlphanumerics}&&\p{So}]]
\b => (?:(?<=[\pL\pM\p{Nd}\p{Nl}\p{Pc}[\p{InEnclosedAlphanumerics}&&\p{So}]])(?![\pL\pM\p{Nd}\p{Nl}\p{Pc}[\p{InEnclosedAlphanumerics}&&\p{So}]])|(?<![\pL\pM\p{Nd}\p{Nl}\p{Pc}[\p{InEnclosedAlphanumerics}&&\p{So}]])(?=[\pL\pM\p{Nd}\p{Nl}\p{Pc}[\p{InEnclosedAlphanumerics}&&\p{So}]]))
\B => (?:(?<=[\pL\pM\p{Nd}\p{Nl}\p{Pc}[\p{InEnclosedAlphanumerics}&&\p{So}]])(?=[\pL\pM\p{Nd}\p{Nl}\p{Pc}[\p{InEnclosedAlphanumerics}&&\p{So}]])|(?<![\pL\pM\p{Nd}\p{Nl}\p{Pc}[\p{InEnclosedAlphanumerics}&&\p{So}]])(?![\pL\pM\p{Nd}\p{Nl}\p{Pc}[\p{InEnclosedAlphanumerics}&&\p{So}]]))
\d => \p{Nd}
\D => \P{Nd}
\R => (?:(?>\u000D\u000A)|[\u000A\u000B\u000C\u000D\u0085\u2028\u2029])
\X => (?>\PM\pM*)
Some things to consider...
That uses for its \X definition what Unicode now refers to as a legacy grapheme cluster, not an extended grapheme cluster, as the latter is rather more complicated. Perl itself now uses the fancier version, but the old version is still perfectly workable for the most common situations. EDIT: See the addendum at the bottom.
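A quick check of that legacy-cluster expansion: an 'e' followed by a combining acute accent comes back as one match, and the trailing 'x' as another:

java.util.regex.Matcher m =
    java.util.regex.Pattern.compile("(?>\\PM\\pM*)").matcher("e\u0301x");
while (m.find()) {
    System.out.println(m.group().length());  // prints 2 (the e plus its accent), then 1 (the x)
}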
What to do about \d depends on your intent, but the default is the Unicode definition. I can see people not always wanting \p{Nd}, but sometimes either [0-9] or \pN.
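The three candidates differ exactly where you would expect; here U+0663 is ARABIC-INDIC DIGIT THREE and U+2163 is ROMAN NUMERAL FOUR:

System.out.println("\u0663".matches("\\p{Nd}"));  // true:  a decimal digit in any script
System.out.println("\u0663".matches("[0-9]"));    // false: ASCII only
System.out.println("\u2163".matches("\\p{Nd}"));  // false: a number, but not a decimal digit
System.out.println("\u2163".matches("\\pN"));     // true:  any Unicode number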
The two boundary definitions, \b and \B, are specifically written to use the \w definition.
That \w definition is overly broad, because it grabs the parenned letters, not just the circled ones. The Unicode Other_Alphabetic property isn’t available until JDK7, so that’s the best you can do.
Boundaries have been a problem ever since Larry Wall first coined the \b and \B syntax for talking about them for Perl 1.0 back in 1987. The key to understanding how \b and \B both work is to dispel two pervasive myths about them:
They never look for whitespace, and they never specifically look for the edge of the string. They only ever look for \w word characters, never for non-word characters.

A \b boundary means:
IF does follow word
THEN doesn't precede word
ELSIF doesn't follow word
THEN does precede word
And those are all defined perfectly straightforwardly as:
follows word is (?<=\w).
precedes word is (?=\w).
doesn't follow word is (?<!\w).
doesn't precede word is (?!\w).
Therefore, since IF-THEN is encoded as an anded-together AB in regexes, an or is X|Y, and because and is higher in precedence than or, that is simply AB|CD. So every \b that means a boundary can be safely replaced with:

(?:(?<=\w)(?!\w)|(?<!\w)(?=\w))

with the \w defined in the appropriate way.
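As a quick sanity check, the emulation should find exactly the same boundaries as the built-in \b; both report four boundaries in "one two" (using the plain ASCII \w here, purely to compare against the built-in):

String emulatedBoundary = "(?:(?<=\\w)(?!\\w)|(?<!\\w)(?=\\w))";
for (String b : new String[] { "\\b", emulatedBoundary }) {
    java.util.regex.Matcher m = java.util.regex.Pattern.compile(b).matcher("one two");
    int count = 0;
    while (m.find()) count++;
    System.out.println(count);  // 4 both times
}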
(You might think it strange that the A and C components are opposites. In a perfect world, you should be able to write that as AB|D, but for a while I was chasing down mutual-exclusion contradictions in Unicode properties, which I think I’ve taken care of, but I left the double condition in the boundary just in case. Plus this makes it more extensible if you get extra ideas later.)
For the \B non-boundaries, the logic is:
IF does follow word
THEN does precede word
ELSIF doesn't follow word
THEN doesn't precede word
Allowing all instances of \B to be replaced with:
(?:(?<=\w)(?=\w)|(?<!\w)(?!\w))
This really is how \b and \B behave. Equivalent patterns for them are:

\b using the ((IF)THEN|ELSE) construct is (?(?<=\w)(?!\w)|(?=\w))
\B using the ((IF)THEN|ELSE) construct is (?(?=\w)(?<=\w)|(?<!\w))

But the versions with just AB|CD are fine, especially if you lack conditional patterns in your regex language, like Java. ☹
I’ve already verified the behaviour of the boundaries using all three equivalent definitions with a test suite that checks 110,385,408 matches per run, and which I've run on a dozen different data configurations according to:
0 .. 7F the ASCII range
80 .. FF the non-ASCII Latin1 range
100 .. FFFF the non-Latin1 BMP (Basic Multilingual Plane) range
10000 .. 10FFFF the non-BMP portion of Unicode (the "astral" planes)
However, people often want a different sort of boundary. They want something that is whitespace and edge-of-string aware:
left edge:  (?:(?<=^)|(?<=\s))
right edge: (?=$|\s)
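With those two pieces you can, for example, match cat only as a whitespace-delimited token:

java.util.regex.Pattern p =
    java.util.regex.Pattern.compile("(?:(?<=^)|(?<=\\s))cat(?=$|\\s)");
System.out.println(p.matcher("the cat sat").find());  // true
System.out.println(p.matcher("concatenate").find());  // false
System.out.println(p.matcher("cat").find());          // true: the edges of the string count too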
The code I posted in my other answer provides this and quite a few other conveniences. This includes definitions for natural-language words, dashes, hyphens, and apostrophes, plus a bit more.
It also allows you to specify Unicode characters in logical code points, not in idiotic UTF-16 surrogates. It’s hard to overstress how important that is! And that’s just for the string expansion.
For regex charclass substitution that makes the charclasses in your Java regexes finally work on Unicode, and work correctly, grab the full source from here. You may do with it as you please, of course. If you make fixes to it, I’d love to hear about it, but you don’t have to. It’s pretty short. The guts of the main regex rewriting function are simple:
switch (code_point) {
    case 'b': newstr.append(boundary);
              break; /* switch */
    case 'B': newstr.append(not_boundary);
              break; /* switch */
    case 'd': newstr.append(digits_charclass);
              break; /* switch */
    case 'D': newstr.append(not_digits_charclass);
              break; /* switch */
    case 'h': newstr.append(horizontal_whitespace_charclass);
              break; /* switch */
    case 'H': newstr.append(not_horizontal_whitespace_charclass);
              break; /* switch */
    case 'v': newstr.append(vertical_whitespace_charclass);
              break; /* switch */
    case 'V': newstr.append(not_vertical_whitespace_charclass);
              break; /* switch */
    case 'R': newstr.append(linebreak);
              break; /* switch */
    case 's': newstr.append(whitespace_charclass);
              break; /* switch */
    case 'S': newstr.append(not_whitespace_charclass);
              break; /* switch */
    case 'w': newstr.append(identifier_charclass);
              break; /* switch */
    case 'W': newstr.append(not_identifier_charclass);
              break; /* switch */
    case 'X': newstr.append(legacy_grapheme_cluster);
              break; /* switch */
    default:  newstr.append('\\');
              newstr.append(Character.toChars(code_point));
              break; /* switch */
}
saw_backslash = false;
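The intended usage is simply rewrite-then-compile; rewriteEscapes below is only a placeholder name for the rewriting function, whatever it is actually called in the posted source:

// rewriteEscapes is a stand-in for the rewriting function discussed above
String rewritten = rewriteEscapes("\\w+");             // expands \w into the Unicode-aware charclass
java.util.regex.Pattern p = java.util.regex.Pattern.compile(rewritten);
System.out.println(p.matcher("élève").matches());      // true with the expanded definition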
Anyway, that code is just an alpha release, stuff I hacked up over the weekend. It won’t stay that way.
For the beta I intend to:
fold together the code duplication
provide a clearer interface regarding unescaping string escapes versus augmenting regex escapes
provide some flexibility in the \d expansion, and maybe the \b
provide convenience methods that handle turning around and calling Pattern.compile or String.matches or whatnot for you
For production release, it should have javadoc and a JUnit test suite. I may include my gigatester, but it’s not written as JUnit tests.
I have good news and bad news.
The good news is that I’ve now got a very close approximation to an extended grapheme cluster to use for an improved \X.
The bad news ☺ is that that pattern is:
(?:(?:\u000D\u000A)|(?:[\u0E40\u0E41\u0E42\u0E43\u0E44\u0EC0\u0EC1\u0EC2\u0EC3\u0EC4\uAAB5\uAAB6\uAAB9\uAABB\uAABC]*(?:[\u1100-\u115F\uA960-\uA97C]+|([\u1100-\u115F\uA960-\uA97C]*((?:[[\u1160-\u11A2\uD7B0-\uD7C6][\uAC00\uAC1C\uAC38]][\u1160-\u11A2\uD7B0-\uD7C6]*|[\uAC01\uAC02\uAC03\uAC04])[\u11A8-\u11F9\uD7CB-\uD7FB]*))|[\u11A8-\u11F9\uD7CB-\uD7FB]+|[^[\p{Zl}\p{Zp}\p{Cc}\p{Cf}&&[^\u000D\u000A\u200C\u200D]]\u000D\u000A])[[\p{Mn}\p{Me}\u200C\u200D\u0488\u0489\u20DD\u20DE\u20DF\u20E0\u20E2\u20E3\u20E4\uA670\uA671\uA672\uFF9E\uFF9F][\p{Mc}\u0E30\u0E32\u0E33\u0E45\u0EB0\u0EB2\u0EB3]]*)|(?s:.))
which in Java you’d write as:
String extended_grapheme_cluster = "(?:(?:\\u000D\\u000A)|(?:[\\u0E40\\u0E41\\u0E42\\u0E43\\u0E44\\u0EC0\\u0EC1\\u0EC2\\u0EC3\\u0EC4\\uAAB5\\uAAB6\\uAAB9\\uAABB\\uAABC]*(?:[\\u1100-\\u115F\\uA960-\\uA97C]+|([\\u1100-\\u115F\\uA960-\\uA97C]*((?:[[\\u1160-\\u11A2\\uD7B0-\\uD7C6][\\uAC00\\uAC1C\\uAC38]][\\u1160-\\u11A2\\uD7B0-\\uD7C6]*|[\\uAC01\\uAC02\\uAC03\\uAC04])[\\u11A8-\\u11F9\\uD7CB-\\uD7FB]*))|[\\u11A8-\\u11F9\\uD7CB-\\uD7FB]+|[^[\\p{Zl}\\p{Zp}\\p{Cc}\\p{Cf}&&[^\\u000D\\u000A\\u200C\\u200D]]\\u000D\\u000A])[[\\p{Mn}\\p{Me}\\u200C\\u200D\\u0488\\u0489\\u20DD\\u20DE\\u20DF\\u20E0\\u20E2\\u20E3\\u20E4\\uA670\\uA671\\uA672\\uFF9E\\uFF9F][\\p{Mc}\\u0E30\\u0E32\\u0E33\\u0E45\\u0EB0\\u0EB2\\u0EB3]]*)|(?s:.))";
¡Tschüß!
It's really unfortunate that \w doesn't work. The proposed solution \p{Alpha} doesn't work for me either. It seems [\p{L}] catches all Unicode letters, so the Unicode equivalent of \w should be [\p{L}\p{Digit}_].
In Java, \w and \d are not Unicode-aware; they only match the ASCII characters [A-Za-z0-9_] and [0-9]. The same goes for \p{Alpha} and friends (the POSIX "character classes" they're based on are supposed to be locale-sensitive, but in Java they've only ever matched ASCII characters). If you want to match Unicode "word characters", you have to spell it out, e.g. [\pL\p{Mn}\p{Nd}\p{Pc}] for letters, non-spacing modifiers (accents), decimal digits, and connecting punctuation.
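For example:

String unicodeWord = "[\\pL\\p{Mn}\\p{Nd}\\p{Pc}]+";
System.out.println("élève".matches(unicodeWord));  // true
System.out.println("élève".matches("\\w+"));       // false: Java's \w is ASCII-only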
However, Java's \b is Unicode-savvy; it uses Character.isLetterOrDigit(ch) and checks for accented letters as well, but the only "connecting punctuation" character it recognizes is the underscore. EDIT: when I try your sample code, it prints "" and "élève" as it should (see it on ideone.com).