I have to parse some strings based on PCRE in Python, and I've no idea how to do that.
The strings I want to parse look like this:
match mysql m/^.\0\0\0\n(4\.[-.\w]+)\0...\0/s p/MySQL/ i/$1/
In this example, I have to get these different items:
"m/^.\0\0\0\n(4\.[-.\w]+)\0...\0/s" ; "p/MySQL/" ; "i/$1/"
The only thing I've found relating to PCRE manipulation in Python is this module: http://pydoc.org/2.2.3/pcre.html (but it says there that it's a .so file ...)
Do you know if some Python module exists to parse this kind of string?
Python supports essentially the same regular expression syntax as Perl, as far as the regular expressions themselves. However, the syntax for using regular expressions is substantially different.
Perl uses Perl regular expressions, not POSIX ones. You can compare the syntaxes yourself, for example in regex(7).
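To make that difference concrete, here is a minimal sketch (the sample line and pattern are illustrative, not taken from any particular tool) of the same match written the Perl way and the Python way:

```python
import re

# An illustrative line in the question's format
line = 'match mysql m/.../s p/MySQL/ i/$1/'

# Perl:   if ($line =~ m{p/(\w+)/}) { print $1; }
# Python: the pattern syntax is the same, but matching is a method
#         call that returns a match object, and groups live on it
m = re.search(r'p/(\w+)/', line)
if m:
    print(m.group(1))  # prints: MySQL
```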
There are some really subtle issues with how Python deals with, or fails to deal with, non-ASCII in patterns and strings. Worse, these disparities vary substantially not just with which version of Python you are using, but also with whether you have a “wide build”.
In general, when you’re doing Unicode stuff, Python 3 with a wide build works best and Python 2 with a narrow build works worst, but all combinations are still a pretty far cry from how Perl regexes work vis-à-vis Unicode. If you’re looking for ᴘᴄʀᴇ patterns in Python, you may have to look a bit further afield than its old re module.
The vexing “wide-build” issues have finally been fixed once and for all — provided you use a sufficiently advanced release of Python. Here’s an excerpt from the v3.3 release notes:
Functionality

Changes introduced by PEP 393 are the following:

- Python now always supports the full range of Unicode code points, including non-BMP ones (i.e. from U+0000 to U+10FFFF). The distinction between narrow and wide builds no longer exists, and Python now always behaves like a wide build, even under Windows.
- With the death of narrow builds, the problems specific to narrow builds have also been fixed, for example:
  - len() now always returns 1 for non-BMP characters, so len('\U0010FFFF') == 1;
  - surrogate pairs are not recombined in string literals, so '\uDBFF\uDFFF' != '\U0010FFFF';
  - indexing or slicing non-BMP characters returns the expected value, so '\U0010FFFF'[0] now returns '\U0010FFFF' and not '\uDBFF';
  - all other functions in the standard library now correctly handle non-BMP code points.
- The value of sys.maxunicode is now always 1114111 (0x10FFFF in hexadecimal). The PyUnicode_GetMax() function still returns either 0xFFFF or 0x10FFFF for backward compatibility, and it should not be used with the new Unicode API (see issue 13054).
- The ./configure flag --with-wide-unicode has been removed.
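If you want to verify those release-note claims yourself, a quick sanity check on Python 3.3 or later looks like this:

```python
import sys

# Non-BMP code points are single characters in every 3.3+ build
assert len('\U0010FFFF') == 1

# Surrogate pairs in literals are no longer silently recombined
assert '\uDBFF\uDFFF' != '\U0010FFFF'

# Indexing a non-BMP character yields the whole character
assert '\U0010FFFF'[0] == '\U0010FFFF'

# The narrow/wide distinction is gone: maxunicode is always 0x10FFFF
assert sys.maxunicode == 1114111
```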
In contrast to what’s currently available in the standard Python distribution’s re library, Matthew Barnett’s regex module for both Python 2 and Python 3 alike is much, much better in pretty much all possible ways and will quite probably replace re eventually. Its particular relevance to your question is that his regex library is far more ᴘᴄʀᴇ (i.e. it’s much more Perl-compatible) in every way than re now is, which will make porting Perl regexes to Python easier for you. Because it is a ground-up rewrite (as in from-scratch, not as in hamburger :), it was written with non-ASCII in mind, which re was not.
The regex library therefore much more closely follows the (current) recommendations of UTS#18: Unicode Regular Expressions in how it approaches things. It meets or exceeds the UTS#18 Level 1 requirements in most if not all regards, something you normally have to use the ICU regex library or Perl itself for, or, if you are especially courageous, the new Java 7 update to its regexes, as that also conforms to the Level 1 requirements from UTS#18.

Beyond meeting those Level 1 requirements, which are all absolutely essential for basic Unicode support but which are not met by Python’s current re library, the awesome regex library also meets the Level 2 requirements for RL2.5 Named Characters (\N{...}), RL2.2 Extended Grapheme Clusters (\X), and the new RL2.7 on Full Properties from revision 14 of UTS#18.
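One small update worth knowing: the standard re module has itself since gained \N{...} named-character escapes (in Python 3.8), though \X grapheme clusters remain regex-module territory. A quick check:

```python
import re

# \N{...} named-character escapes work in the standard re module
# as of Python 3.8; \X for grapheme clusters still does not.
m = re.match(r'\N{GREEK SMALL LETTER ALPHA}\N{GREEK SMALL LETTER BETA}', 'αβγ')
assert m is not None and m.group() == 'αβ'
```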
Matthew’s regex module also does Unicode casefolding, so that case-insensitive matches work reliably on Unicode, which re does not.

The following is no longer true, because regex now supports full Unicode casefolding, like Perl and Ruby.

One super-tiny difference is that, for now, Perl’s case-insensitive patterns use full string-oriented casefolds while his regex module still uses simple single-char-oriented casefolds, but this is something he’s looking into. It’s actually a very hard problem, one which, apart from Perl, only Ruby even attempts.
Under full casefolding, this means that (for example) "ß" now correctly matches "SS", "ss", "ſſ", "ſs" (etc.) when case-insensitive matching is selected. (This is admittedly more important in the Greek script than the Latin one.)
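You can observe the simple-versus-full casefolding gap with nothing but the standard library, since str.casefold() implements the full folding while re.IGNORECASE only does the simple per-character kind:

```python
import re

# Full casefolding (what Perl's /i does): one ß folds to two s's
assert 'ß'.casefold() == 'ss'
assert 'ß'.casefold() == 'SS'.casefold()

# Simple, per-character folding (what re.IGNORECASE does) can never
# change a string's length, so ß cannot match SS under plain re
assert re.fullmatch('ß', 'SS', re.IGNORECASE) is None
```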
See also the slides or doc source code from my third OSCON 2011 talk, entitled “Unicode Support Shootout: The Good, the Bad, and the (mostly) Ugly”, for general issues in Unicode support across JavaScript, PHP, Go, Ruby, Python, Java, and Perl. If you can’t use either Perl regexes or possibly the ICU regex library (which doesn’t have named captures, alas!), then Matthew’s regex for Python is probably your best shot.
Nᴏᴛᴀ Bᴇɴᴇ s.ᴠ.ᴘ. (= s’il vous plaît, et même s’il ne vous plaît pas; that is: please, even if it doesn’t please you :) The following unsolicited noncommercial nonadvertisement was not actually put here by the author of the Python regex library. :)
regex Features

The Python regex library has a cornucopia of superneat features, some of which are found in no other regex system anywhere. These make it very much worth checking out no matter whether you happen to be using it for its ᴘᴄʀᴇ-ness or its stellar Unicode support.

A few of this module’s outstanding features of interest are:

- Scoped ismx-type options, so that (?i:foo) only casefolds for foo, not overall, or (?-i:foo) to turn it off just on foo. This is how Perl works (or can).
- Fuzzy matching (which agrep and glimpse also have).
- \L<list> interpolation.
- Word-boundary assertions \m and \M.
- \R per RL1.6.
- Repeated captures, as in (\w+\s+)+, where you can get all separate matches of the first group, not just its last match. (I believe C# might also do this.)
- Start and end positions for all groups, much like Perl’s @+ and @- arrays.
- The branch-reset operator (?|...|...|...|) to reset group numbering in each branch the way it works in Perl.
- \w, \b, \s, and such work on Unicode.
- \X for graphemes.
- The \G continuation-point assertion.
- Works on big strings (re only has 32-bit indices).

Ok, that’s enough hype. :)
One final alternative that is worth looking at if you are a regex geek is the Python library bindings to Russ Cox’s awesome RE2 library. It also supports Unicode natively, including simple char-based casefolding, and unlike re it notably provides for both the Unicode General Category and the Unicode Script character properties, which are the two key properties you most often need for the simpler kinds of Unicode processing.
Although RE2 misses out on a few Unicode features like \N{...} named-character support found in ICU, Perl, and Python, it has extremely serious computational advantages that make it the regex engine of choice whenever you’re concerned about starvation-based denial-of-service attacks through regexes in web queries and such. It manages this by forbidding backreferences, which cause a regex to stop being regular and risk super-exponential explosions in time and space.
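To see the kind of blow-up RE2’s design rules out, here is a small sketch using the classic textbook pathological pattern (nothing RE2-specific) against Python’s backtracking re engine:

```python
import re

# Classic catastrophic-backtracking pattern: the nested quantifier
# (a+)+ lets the engine split the run of a's in exponentially many
# ways, and the trailing 'b' guarantees every split must be tried.
pattern = re.compile(r'(a+)+$')

# Roughly 2**20 backtracking states before the match finally fails;
# make the string much longer and this effectively never returns.
result = pattern.match('a' * 20 + 'b')
assert result is None  # RE2, staying regular, rejects this in linear time
```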
Library bindings for RE2 are available not just for C/C++ and Python, but also for Perl and most especially for Go, where it is slated to very shortly replace the standard regex library there.
You're looking for '(\w/[^/]+/\w*)'.

Used like so:

```python
import re

x = re.compile(r'(\w/[^/]+/\w*)')
s = 'match mysql m/^.\0\0\0\n(4\\.[-.\\w]+)\0...\0/s p/MySQL/ i/$1/'
y = x.findall(s)
# y == ['m/^.\x00\x00\x00\n(4\\.[-.\\w]+)\x00...\x00/s', 'p/MySQL/', 'i/$1/']
```
Found it while playing with Edi Weitz's Regex Coach, so thanks to the comments on the question, which made me remember its existence.