 

Word tokenization using Python regular expressions

I am trying to split strings into lists of "tags" in Python. The splitting should handle strings such as "HappyBirthday" and remove most punctuation, but preserve hyphens and apostrophes. My starting point is:

import re
# s holds the input text; the group is non-capturing so findall returns full matches
tags = re.findall(r"(?:[A-Z]{2,}(?=[A-Z]|$)|[A-Z][a-z]*)|\w+-\w+|[\w']+", s)

I would want to turn this sample data:

Jeff's dog is un-American SomeTimes! BUT NOTAlways

Into:

["Jeff's", 'dog', 'is', 'un-American', 'Some', 'Times', 'BUT', 'NOT', 'Always']

P.S. I am sorry my description isn't very good. I am not sure how to explain it, and have been mostly unsuccessful with Google. I hope the example illustrates it properly.

Edit: I think I need to be more precise, so additionally:

  1. if the word is hyphenated and capitalized, like 'UN-American', it should be kept as one word, so the output would be 'UN-American'
  2. if the hyphen has a space on either or both sides, a la 'THIS- is' or 'This - is', it should ignore the hyphen and produce ["THIS", "is"] and ["This", "is"] respectively,
  3. and similarly for an apostrophe: if it's in the middle of a word, like "What'sItCalled", it should produce ["What's", "It", "Called"]
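
Collecting the examples above as input -> expected pairs, just to pin the requirements down (the variable name is illustrative):

# Desired input -> output pairs, gathered from the examples in this question
expected = {
    "Jeff's dog is un-American SomeTimes! BUT NOTAlways":
        ["Jeff's", 'dog', 'is', 'un-American', 'Some', 'Times', 'BUT', 'NOT', 'Always'],
    'UN-American':    ['UN-American'],
    'THIS- is':       ['THIS', 'is'],
    'This - is':      ['This', 'is'],
    "What'sItCalled": ["What's", 'It', 'Called'],
}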
asked Jun 01 '11 by user779420


2 Answers

I suggest the following:

re.findall("[A-Z]{2,}(?![a-z])|[A-Z][a-z]+(?=[A-Z])|[\'\w\-]+",s)

This yields for your example:

["Jeff's", 'dog', 'is', 'un-American', 'Some', 'Times', 'BUT', 'NOT', 'Always']

Explanation: The regex consists of three alternatives:

  1. [A-Z]{2,}(?![a-z]) matches words written in all capitals
  2. [A-Z][a-z]+(?=[A-Z]) matches words with a leading capital letter. The lookahead (?=[A-Z]) stops the match before the next capital letter
  3. [\'\w\-]+ matches everything else, i.e. words that may contain ' and -.
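
For reference, here is the suggestion as a complete, runnable snippet (the variable names are mine):

import re

s = "Jeff's dog is un-American SomeTimes! BUT NOTAlways"
tags = re.findall(r"[A-Z]{2,}(?![a-z])|[A-Z][a-z]+(?=[A-Z])|[\'\w\-]+", s)
print(tags)
# ["Jeff's", 'dog', 'is', 'un-American', 'Some', 'Times', 'BUT', 'NOT', 'Always']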
answered Sep 27 '22 by phynfo

To handle your edited cases, I'd modify phynfo's (+1) great answer to:

>>> s = """Jeff's UN-American Un-American un-American 
           SomeTimes! BUT NOTAlways This- THIS- 
           What'sItCalled someTimes"""
>>> re.findall("[A-Z\-\']{2,}(?![a-z])|[A-Z\-\'][a-z\-\']+(?=[A-Z])|[\'\w\-]+",s)
["Jeff's", 'UN-', 'American', 'Un-', 'American', 'un-American', 
 'Some', 'Times', 'BUT', 'NOT', 'Always', 'This-', 'THIS-', 
 "What's", 'It', 'Called' 'someTimes']

You have to define the rules for your desired behaviour clearly. Tokenization isn't defined by itself; you need something similar to phynfo's rules. E.g., you have a rule that 'NOTAlways' should split into 'NOT' and 'Always', and that hyphens should be preserved. Thus 'UN-American' is split up, just like 'UNAmerican' would be split up. You can try defining additional rules, but you have to be clear about which rule is applied when rules overlap.
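
To make the overlap point concrete, here is a small sketch (mine, not part of the answer above) showing that the order of the regex alternatives decides which rule wins:

import re

s = 'UN-American'

# All-caps rule first: it consumes 'UN-' before the
# catch-all hyphenated-word rule gets a chance.
print(re.findall(r"[A-Z\-\']{2,}(?![a-z])|[\'\w\-]+", s))  # ['UN-', 'American']

# Hyphenated-word rule first: the token is kept whole.
print(re.findall(r"[\'\w\-]+|[A-Z\-\']{2,}(?![a-z])", s))  # ['UN-American']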

answered Sep 27 '22 by dr jimbob