
NLTK - nltk.tokenize.RegexpTokenizer - regex not working as expected

I am trying to Tokenize text using RegexpTokenizer.

Code:

from nltk.tokenize import RegexpTokenizer
#from nltk.tokenize import word_tokenize

line = "U.S.A Count U.S.A. Sec.of U.S. Name:Dr.John Doe J.Doe 1.11 1,000 10--20 10-20"
pattern = '[\d|\.|\,]+|[A-Z][\.|A-Z]+\b[\.]*|[\w]+|\S'
tokenizer = RegexpTokenizer(pattern)

print(tokenizer.tokenize(line))
#print(word_tokenize(line))

Output:

['U', '.', 'S', '.', 'A', 'Count', 'U', '.', 'S', '.', 'A', '.', 'Sec', '.', 'of', 'U', '.', 'S', '.', 'Name', ':', 'Dr', '.', 'John', 'Doe', 'J', '.', 'Doe', '1.11', '1,000', '10', '-', '-', '20', '10', '-', '20']

Expected Output:

['U.S.A', 'Count', 'U.S.A.', 'Sec', '.', 'of', 'U.S.', 'Name', ':', 'Dr', '.', 'John', 'Doe', 'J.', 'Doe', '1.11', '1,000', '10', '-', '-', '20', '10', '-', '20']

Why is the tokenizer also splitting my expected tokens "U.S.A" and "U.S."? How can I resolve this issue?

My regex : https://regex101.com/r/dS1jW9/1

asked Aug 25 '16 by RAVI


1 Answer

The point is that your \b was a backspace character; you need to use a raw string literal. Also, you have literal pipes inside the character classes that would further mess up your output.
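A quick way to see the backspace problem, independent of NLTK (a minimal check, not part of the original question):

```python
# In a regular string literal, '\b' is the backspace control character
# (U+0008), not the regex word-boundary token, so the pattern can never
# match at a word boundary.
print('\b' == '\x08')          # backspace character
print(len('\b'), len(r'\b'))   # 1 character vs. the 2 characters '\' + 'b'
```

Only the raw string `r'\b'` reaches the regex engine as the two-character sequence `\b` that it interprets as a word boundary.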

This works as expected:

>>> pattern = r'[\d.,]+|[A-Z][.A-Z]+\b\.*|\w+|\S'
>>> tokenizer = RegexpTokenizer(pattern)
>>> print(tokenizer.tokenize(line))

['U.S.A', 'Count', 'U.S.A.', 'Sec', '.', 'of', 'U.S.', 'Name', ':', 'Dr', '.', 'John', 'Doe', 'J.', 'Doe', '1.11', '1,000', '10', '-', '-', '20', '10', '-', '20']

Note that putting a single \w into a character class is pointless. Also, you do not need to escape every non-word char (like a dot) in the character class as they are mostly treated as literal chars there (only ^, ], - and \ require special attention).

answered Sep 18 '22 by Wiktor Stribiżew