
How to define special "untokenizable" words for nltk.word_tokenize

Tags:

tokenize

nltk

I'm using nltk.word_tokenize to tokenize sentences that contain names of programming languages, frameworks, etc., and these names get incorrectly tokenized.

For example:

>>> import nltk
>>> nltk.word_tokenize("I work with C#.")
['I', 'work', 'with', 'C', '#', '.']

Is there a way to pass a list of "exceptions" like this to the tokenizer? I have already compiled a list of all the terms (languages, etc.) that I don't want split.

Asked Aug 10 '17 by Suilan Estévez

1 Answer

The Multi-Word Expression Tokenizer (nltk.tokenize.MWETokenizer) should be what you need.

You add each exception as a tuple of tokens and then pass the tokenizer an already-tokenized sentence:

>>> import nltk
>>> tokenizer = nltk.tokenize.MWETokenizer()
>>> tokenizer.add_mwe(('C', '#'))
>>> tokenizer.add_mwe(('F', '#'))
>>> tokenizer.tokenize(['I', 'work', 'with', 'C', '#', '.'])
['I', 'work', 'with', 'C_#', '.']
>>> tokenizer.tokenize(['I', 'work', 'with', 'F', '#', '.'])
['I', 'work', 'with', 'F_#', '.']
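To avoid the default underscore joiner and get "C#" back as a single token, you can pass your whole exception list to the constructor along with separator=''. A minimal sketch (the exceptions list and the input tokens here are hypothetical examples; the token list is what nltk.word_tokenize would produce for the sentence):

```python
from nltk.tokenize import MWETokenizer

# Hypothetical exception list: each entry is the token sequence that
# word_tokenize produces for a term you want kept together.
exceptions = [('C', '#'), ('F', '#'), ('C', '+', '+')]

# separator='' rejoins the merged pieces with no underscore in between.
tokenizer = MWETokenizer(exceptions, separator='')

# Tokens as word_tokenize would split "I work with C# and C++."
tokens = ['I', 'work', 'with', 'C', '#', 'and', 'C', '+', '+', '.']
print(tokenizer.tokenize(tokens))
# ['I', 'work', 'with', 'C#', 'and', 'C++', '.']
```

In practice you would feed it nltk.word_tokenize(sentence) directly instead of a hand-written token list.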
Answered Nov 04 '22 by Suzana