 

Custom sentence segmentation using Spacy

I am new to spaCy and NLP, and I'm facing the issue below while doing sentence segmentation with spaCy.

The text I am trying to tokenise into sentences contains numbered lists (with a space between the numbering and the actual text), like below.

import spacy
nlp = spacy.load('en_core_web_sm')
text = "This is first sentence.\nNext is numbered list.\n1. Hello World!\n2. Hello World2!\n3. Hello World!"
text_sentences = nlp(text)
for sentence in text_sentences.sents:
    print(sentence.text)

Output (1., 2., 3. are treated as separate sentences) is:

This is first sentence.

Next is numbered list.

1.
Hello World!

2.
Hello World2!

3.
Hello World!

But if there is no space between the numbering and the actual text, then sentence tokenisation works fine, as below:

import spacy
nlp = spacy.load('en_core_web_sm')
text = "This is first sentence.\nNext is numbered list.\n1.Hello World!\n2.Hello World2!\n3.Hello World!"
text_sentences = nlp(text)
for sentence in text_sentences.sents:
    print(sentence.text)

Output (desired) is:

This is first sentence.

Next is numbered list.

1.Hello World!

2.Hello World2!

3.Hello World!

Please suggest whether the sentence detector can be customised to handle this.

Satheesh K asked Sep 06 '18 13:09


1 Answer

When you use a pretrained model with spaCy, sentences are split according to the data the model was trained on.

Of course, there are cases like yours where somebody may want to use custom sentence segmentation logic. This is possible by adding a component to the spaCy pipeline.

For your case, you can add a rule that prevents sentence splitting when there is a {number}. pattern.

A workaround for your problem:

import spacy
import re

nlp = spacy.load('en_core_web_sm')

# Matches list numbers such as "1", "2", "10", ...
boundary = re.compile('^[0-9]+$')

def custom_seg(doc):
    prev = doc[0].text
    length = len(doc)
    for index, token in enumerate(doc):
        # If the previous token is a list number and the current token is the
        # trailing ".", tell the parser that the next token does not start a sentence.
        if token.text == '.' and boundary.match(prev) and index != (length - 1):
            doc[index + 1].is_sent_start = False
        prev = token.text
    return doc

nlp.add_pipe(custom_seg, before='parser')  # spaCy v2: pass the function itself
text = u'This is first sentence.\nNext is numbered list.\n1. Hello World!\n2. Hello World2!\n3. Hello World!'
doc = nlp(text)
for sentence in doc.sents:
    print(sentence.text)
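
Note that the snippet above targets spaCy v2, where a plain function can be added to the pipeline. If you are on spaCy v3, custom components have to be registered first and then added by name; a minimal sketch of the same idea, assuming spaCy v3 and the same en_core_web_sm model, would look like this:

import re
import spacy
from spacy.language import Language

boundary = re.compile('^[0-9]+$')

@Language.component('custom_seg')  # register the component under a name
def custom_seg(doc):
    prev = doc[0].text
    for index, token in enumerate(doc):
        if token.text == '.' and boundary.match(prev) and index != len(doc) - 1:
            doc[index + 1].is_sent_start = False
        prev = token.text
    return doc

nlp = spacy.load('en_core_web_sm')
nlp.add_pipe('custom_seg', before='parser')  # v3 takes the registered name, not the function

Either way, the component runs before the parser, so marking is_sent_start = False on the token that follows each "{number}." prevents the parser from opening a new sentence there, and the numbered items should come out as whole sentences, matching the desired output in the question.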

Hope it helps!

gdaras answered Oct 16 '22 18:10