I am trying to split strings into lists of "tags" in Python. The splitting should handle strings such as "HappyBirthday" and remove most punctuation but preserve hyphens and apostrophes. My starting point is:
tags = re.findall(r"([A-Z]{2,}(?=[A-Z]|$)|[A-Z][a-z]*)|\w+-\w+|[\w']+", s)
I would want to turn this sample data:
Jeff's dog is un-American SomeTimes! BUT NOTAlways
Into:
["Jeff's", 'dog', 'is', 'un-American', 'Some', 'Times', 'BUT', 'NOT', 'Always']
P.S. I am sorry my description isn't very good. I am not sure how to explain it, and I have been mostly unsuccessful with Google. I hope the example illustrates it properly.
Edit: I think I needed to be more precise, so also:
I suggest the following:
re.findall(r"[A-Z]{2,}(?![a-z])|[A-Z][a-z]+(?=[A-Z])|[\'\w\-]+", s)
This yields for your example:
["Jeff's", 'dog', 'is', 'un-American', 'Some', 'Times', 'BUT', 'NOT', 'Always']
Explanation: The RegExp is made up of 3 alternatives:
[A-Z]{2,}(?![a-z]) matches words with all letters capital.
[A-Z][a-z]+(?=[A-Z]) matches words with a first capital letter; the lookahead (?=[A-Z]) stops the match before the next capital letter.
[\'\w\-]+ matches all the rest, i.e. words which may contain ' and -.
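Putting the three alternatives together, here is a quick sanity check of the pattern against the sample string from the question:

```python
import re

s = "Jeff's dog is un-American SomeTimes! BUT NOTAlways"

# Three alternatives, tried left to right: all-caps runs, CamelCase
# pieces, then everything else (letters, digits, apostrophes, hyphens).
tokens = re.findall(r"[A-Z]{2,}(?![a-z])|[A-Z][a-z]+(?=[A-Z])|[\'\w\-]+", s)
print(tokens)
# ["Jeff's", 'dog', 'is', 'un-American', 'Some', 'Times', 'BUT', 'NOT', 'Always']
```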
To handle your edited cases, I'd modify phynfo's (+1) great answer to:
>>> s = """Jeff's UN-American Un-American un-American
SomeTimes! BUT NOTAlways This- THIS-
What'sItCalled someTimes"""
>>> re.findall(r"[A-Z\-\']{2,}(?![a-z])|[A-Z\-\'][a-z\-\']+(?=[A-Z])|[\'\w\-]+", s)
["Jeff's", 'UN-', 'American', 'Un-', 'American', 'un-American',
'Some', 'Times', 'BUT', 'NOT', 'Always', 'This-', 'THIS-',
"What's", 'It', 'Called', 'someTimes']
You have to clearly define the rules for the behavior you want. Tokenization isn't defined in itself; you need something like phynfo's rules. E.g., you have a rule that 'NOTAlways' should split into 'NOT' and 'Always', and that hyphens should be preserved, so 'UN-American' is split up just as 'UNAmerican' would be. You can try defining additional rules, but you have to be clear about which rule applies when rules overlap.
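One way to see why rule order matters when rules overlap: Python's re tries alternatives left to right at each position, so putting the catch-all first means it wins everywhere and no splitting happens. A minimal sketch (the pattern variants here are my own illustration, not from the answers above):

```python
import re

s = "NOTAlways"

# Specific rules first: all-caps run, then CamelCase piece, then catch-all.
specific_first = r"[A-Z]{2,}(?![a-z])|[A-Z][a-z]+(?=[A-Z])|[\w'-]+"
# Catch-all first: it matches at every position, so the specific
# alternatives never get a chance to fire.
catchall_first = r"[\w'-]+|[A-Z]{2,}(?![a-z])|[A-Z][a-z]+(?=[A-Z])"

print(re.findall(specific_first, s))  # ['NOT', 'Always']
print(re.findall(catchall_first, s))  # ['NOTAlways']
```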