How can I prevent spacy's tokenizer from splitting a specific substring when tokenizing a string?


More specifically, I have this sentence:

Once unregistered, the folder went away from the shell.

which spaCy 1.6.0 tokenizes as [Once/unregistered/,/the/folder/went/away/from/the/she/ll/.]. I don't want the substring shell to be split into the two tokens she and ll.


Here is the code I use:

# To install spacy:
# sudo pip install spacy
# sudo python -m spacy.en.download parser # will take 0.5 GB

import spacy
nlp = spacy.load('en')

# https://spacy.io/docs/usage/processing-text
document = nlp(u'Once unregistered, the folder went away from the shell.')

for token in document:
    print('token.i: {0}\ttoken.idx: {1}\ttoken.pos: {2:10}token.text: {3}'.format(
        token.i, token.idx, token.pos_, token.text))

which outputs:

token.i: 0      token.idx: 0    token.pos: ADV       token.text: Once
token.i: 1      token.idx: 5    token.pos: ADJ       token.text: unregistered
token.i: 2      token.idx: 17   token.pos: PUNCT     token.text: ,
token.i: 3      token.idx: 19   token.pos: DET       token.text: the
token.i: 4      token.idx: 23   token.pos: NOUN      token.text: folder
token.i: 5      token.idx: 30   token.pos: VERB      token.text: went
token.i: 6      token.idx: 35   token.pos: ADV       token.text: away
token.i: 7      token.idx: 40   token.pos: ADP       token.text: from
token.i: 8      token.idx: 45   token.pos: DET       token.text: the
token.i: 9      token.idx: 49   token.pos: PRON      token.text: she
token.i: 10     token.idx: 52   token.pos: VERB      token.text: ll
token.i: 11     token.idx: 54   token.pos: PUNCT     token.text: .
asked Jan 26 '17 by Franck Dernoncourt

1 Answer

spaCy lets you add exceptions to its tokenizer. (The split happens in the first place because spaCy 1.x's English tokenizer ships exceptions for apostrophe-free contraction variants, so shell apparently matches the built-in entry for she'll.)

Adding an exception to prevent the string shell from being split by the tokenizer can be done with nlp.tokenizer.add_special_case as follows:

import spacy
from spacy.symbols import ORTH, LEMMA, POS
nlp = spacy.load('en')

# Register 'shell' as a special case: the tokenizer will emit it as a
# single token with the given lemma and part-of-speech.
nlp.tokenizer.add_special_case(u'shell', [
    {
        ORTH: u'shell',
        LEMMA: u'shell',
        POS: u'NOUN',
    },
])

# https://spacy.io/docs/usage/processing-text
document = nlp(u'Once unregistered, the folder went away from the shell.')

for token in document:
    print('token.i: {0}\ttoken.idx: {1}\ttoken.pos: {2:10}token.text: {3}'.format(
        token.i, token.idx, token.pos_, token.text))

which outputs:

token.i: 0      token.idx: 0    token.pos: ADV       token.text: Once
token.i: 1      token.idx: 5    token.pos: ADJ       token.text: unregistered
token.i: 2      token.idx: 17   token.pos: PUNCT     token.text: ,
token.i: 3      token.idx: 19   token.pos: DET       token.text: the
token.i: 4      token.idx: 23   token.pos: NOUN      token.text: folder
token.i: 5      token.idx: 30   token.pos: VERB      token.text: went
token.i: 6      token.idx: 35   token.pos: ADV       token.text: away
token.i: 7      token.idx: 40   token.pos: ADP       token.text: from
token.i: 8      token.idx: 45   token.pos: DET       token.text: the
token.i: 9      token.idx: 49   token.pos: NOUN      token.text: shell
token.i: 10     token.idx: 54   token.pos: PUNCT     token.text: .
answered by Franck Dernoncourt