I have written two REs to match several string sequences in a string. For example, let's assume the two regular expressions are RE1 and RE2. The strings can be in these four forms:
1) Match ONLY RE1 'one or more times'
2) Match ONLY RE2 'one or more times'
3) Match RE1 'one or more times' AND match RE2 'one or more times'
4) Match NEITHER RE1 NOR RE2
Currently I am using if statements to check each of these, but I know it's very expensive as I am doing the matching for a particular string several times. I thought of using 'or' (|), but the problem with that is that the regex will stop matching once it finds the first matching sequence and not continue to find the others. I want to find matching sequences 'one or more times'.
Update:
For example:
RE1 = (\d{1,3}[a-zA-Z]?/\d{1,3}[a-zA-Z]?)
RE2 = (\babc\b)
String: *some string* 100/64h *some string* 120h/90 *some string* abc 200/100 abc *some string* 100h/100f
Matches: '100/64h', '120h/90', 'abc', '200/100', 'abc', '100h/100f'
How can I merge these two REs to make my program more efficient? I am using Python to code this.
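For illustration (this sketch is mine, not part of the original question), joining the two patterns with | and calling re.findall once does return every non-overlapping match in the order it appears; the capturing parentheses are dropped so findall returns whole matches rather than group tuples:
import re

text = '*some string* 100/64h *some string* 120h/90 *some string* abc 200/100 abc *some string* 100h/100f'
combined = r'\d{1,3}[a-zA-Z]?/\d{1,3}[a-zA-Z]?|\babc\b'
print(re.findall(combined, text))
# ['100/64h', '120h/90', 'abc', '200/100', 'abc', '100h/100f']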
I made this to find all matches with multiple regular expressions:
import re

regex1 = r"your regex here"
regex2 = r"your regex here"
regex3 = r"your regex here"
regexList = [regex1, regex2, regex3]
found_regex_list = []
for x in regexList:
    # findall returns every non-overlapping match of this pattern
    for y in re.findall(x, your_string):
        found_regex_list.append(y)
Concatenation: If R1 and R2 are regular expressions, then R1R2 (also written as R1.R2) is also a regular expression, and L(R1R2) = L(R1) concatenated with L(R2). Kleene closure: If R1 is a regular expression, then R1* (the Kleene closure of R1) is also a regular expression.
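A small illustration (my own example, not from the post) of concatenation and Kleene closure in Python's re module:
import re

# Concatenation: 'ab' matches an 'a' immediately followed by a 'b'.
print(re.findall(r'ab', 'ab abb b a'))    # ['ab', 'ab']
# Kleene closure: 'ab*' matches an 'a' followed by zero or more 'b's.
print(re.findall(r'ab*', 'a ab abb b'))   # ['a', 'ab', 'abb']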
Chaining regular expressions: regular expressions can be chained together using the pipe character (|), which allows a single regex string to accept multiple alternative patterns.
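A brief example of my own showing the pipe in action:
import re

# 'cat|dog' matches either alternative wherever it occurs.
print(re.findall(r'cat|dog', 'cat dog bird cat'))   # ['cat', 'dog', 'cat']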
By default, the count argument is set to zero, which means the re.sub() method will replace all pattern occurrences in the target string.
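A quick illustration (my own example) of the count argument:
import re

s = 'one two three'
print(re.sub(r'\s', '-', s))            # 'one-two-three' (count=0 replaces all)
print(re.sub(r'\s', '-', s, count=1))   # 'one-two three' (only the first occurrence)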
You say "I know its very expensive as I am doing the matching for a particular string several times." That suggests to me that you are running each RE several times. In that case, you are making a mistake that can be resolved without writing a more complex RE.
re1_matches = re.findall(re1, text)
re2_matches = re.findall(re2, text)
This will result in two lists of matches. You can then perform boolean operations on those lists to generate whatever results you need, or you can concatenate them if you need all the matches in one list. You could also use re.match (match anchored at the beginning of the string) or re.search (match anywhere in the string) for each of these if you don't need lists of results, but only need to know that there's a match.
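For example, here is a short sketch of my own (using the re1, re2, and text names from the snippet above) that distinguishes the four cases from the question with re.search:
m1 = re.search(re1, text)   # match object if RE1 occurs anywhere, else None
m2 = re.search(re2, text)
if m1 and m2:
    print("matches both RE1 and RE2")
elif m1:
    print("matches only RE1")
elif m2:
    print("matches only RE2")
else:
    print("matches neither RE1 nor RE2")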
In any case, creating a more complex RE is probably neither necessary nor desirable here. But it's not immediately clear to me exactly what you want, so I could be wrong about that.
Some suggestions about how to use boolean operators to process lists. First some setup:
>>> re1 = r'(\d{1,3}[a-zA-Z]?/\d{1,3}[a-zA-Z]?)'
>>> re2 = r'(\babc\b)'
>>> re.findall(re1, text)
['100/64h', '120h/90', '200/100', '100h/100f']
>>> re.findall(re2, text)
['abc', 'abc']
>>> re1_matches = re.findall(re1, text)
>>> re2_matches = re.findall(re2, text)
>>> rex_nomatch = re.findall('conglomeration_of_sandwiches', text)
and returns its first falsy operand, or the final operand if all of them are truthy.
>>> not re1_matches and re2_matches
False
So if you want the list and not a flat boolean, you have to test the result you want last:
>>> not rex_nomatch and re1_matches
['100/64h', '120h/90', '200/100', '100h/100f']
Similarly:
>>> not rex_nomatch and re2_matches
['abc', 'abc']
If you just want to know that both REs generated matches, but don't need any more, you can do this:
>>> re1_matches and re2_matches
['abc', 'abc']
Finally, here's a compact way to get the concatenation if both REs generate matches:
>>> re1_matches and re2_matches and re1_matches + re2_matches
['100/64h', '120h/90', '200/100', '100h/100f', 'abc', 'abc']