 

How to tokenize continuous words with no whitespace delimiters?

I'm using Python with nltk. I need to process some English text that contains no whitespace, but nltk's word_tokenize function can't handle input like this. How can I tokenize text that has no whitespace? Are there any tools for this in Python?

Asked by VcamX on Jul 14 '13


1 Answer

I am not aware of any such tools, but the solution to your problem depends on the language.

For Turkish, you can scan the input text letter by letter and accumulate the letters into a word. As soon as the accumulated letters form a valid dictionary word, you save it as a separate token, clear the buffer, and continue scanning (see the sketch after the next paragraph).

You can try the same approach for English, but you will likely hit cases where the end of one word is also the start of another dictionary word, and that kind of ambiguity will cause problems; the second example in the sketch below shows one way the greedy scan goes wrong.
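
Here is a rough sketch of the letter-by-letter scan I mean. The dictionary is a tiny hard-coded set purely for illustration; a real setup would plug in a proper word list (for example, the nltk words corpus), but the idea is the same.

# Rough sketch of the letter-by-letter scan described above.
# The dictionary is a tiny hard-coded set just for illustration;
# a real setup would use a full word list (e.g. from nltk's corpora).

def scan_tokenize(text, dictionary):
    """Accumulate letters in a buffer and emit the buffer as a token
    as soon as it matches a dictionary word, then start a new buffer."""
    tokens = []
    buffer = ""
    for letter in text:
        buffer += letter
        if buffer in dictionary:
            tokens.append(buffer)
            buffer = ""
    if buffer:  # leftover letters that never formed a dictionary word
        tokens.append(buffer)
    return tokens

dictionary = {"tokenize", "this", "text", "please"}
print(scan_tokenize("tokenizethistextplease", dictionary))
# ['tokenize', 'this', 'text', 'please']

# One way the greedy scan fails on English-like input: a premature
# match on the short word "a" strands the rest of the string, which
# then never matches anything in the dictionary.
print(scan_tokenize("asking", {"a", "ask", "asking", "king"}))
# ['a', 'sking']

As the last call shows, a simple greedy scan commits to the first match it sees, so you would need some form of backtracking or a smarter segmentation strategy for English.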

Answered by Ivan Mushketyk on Sep 23 '22