
How is WordPiece tokenization helpful to effectively deal with rare words problem in NLP?

I have seen that NLP models such as BERT utilize WordPiece for tokenization. In WordPiece, we split tokens like playing into play and ##ing. It is mentioned that it covers a wider spectrum of Out-Of-Vocabulary (OOV) words. Can someone please explain how WordPiece tokenization is actually done, and how it effectively helps with rare/OOV words?

asked Mar 27 '19 by Harman


People also ask

How does WordPiece tokenization work?

One such subword tokenization technique that is commonly used and can be applied to many other NLP models is called WordPiece. Given text, WordPiece first pre-tokenizes the text into words (by splitting on punctuation and whitespace) and then tokenizes each word into subword units, called wordpieces.

Why is tokenization important in NLP?

That's why tokenization is a foundational step in Natural Language Processing. This process is important because the meaning of the text can be interpreted through analysis of the words present in the text. Tokenization is the process of breaking apart original text into individual pieces (tokens) for further analysis.

What is a token in tokenization in context of natural language processing?

Tokens are the building blocks of Natural Language. Tokenization is a way of separating a piece of text into smaller units called tokens. Here, tokens can be either words, characters, or subwords. Hence, tokenization can be broadly classified into 3 types – word, character, and subword (n-gram characters) tokenization.

What is subword-based tokenization and how does it work?

The idea behind subword tokenization is that frequently occurring words should be in the vocabulary, whereas rare words should be split into frequent subwords. Subword-based tokenization lies between character- and word-based tokenization.

What is tokenization in NLP?

Tokenization is one of the first steps in NLP: the task of splitting a sequence of text into units with semantic meaning. These units are called tokens, and the difficulty in tokenization lies in finding the ideal split so that every token in the text carries the correct meaning and no tokens are left out.


1 Answer

WordPiece and BPE are two similar, commonly used techniques for segmenting words into subword units in NLP tasks. In both cases, the vocabulary is initialized with all the individual characters in the language, and then the most frequent/likely combinations of symbols are iteratively added to the vocabulary.

Consider the WordPiece algorithm from the original paper (wording slightly modified by me):

  1. Initialize the word unit inventory with all the characters in the text.
  2. Build a language model on the training data using the inventory from 1.
  3. Generate a new word unit by combining two units out of the current word inventory to increment the word unit inventory by one. Choose the new word unit out of all the possible ones that increases the likelihood on the training data the most when added to the model.
  4. Go to step 2 until a predefined limit of word units is reached or the likelihood increase falls below a certain threshold.

The BPE algorithm only differs in Step 3, where it simply chooses the new word unit as the combination of the next most frequently occurring pair among the current set of subword units.

Example

Input text: she walked . he is a dog walker . i walk

First 3 BPE Merges:

  1. w a = wa
  2. l k = lk
  3. wa lk = walk

So at this stage, your vocabulary includes all the initial characters, along with wa, lk, and walk. You usually do this for a fixed number of merge operations.
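
To make the learning loop concrete, here is a minimal Python sketch of the BPE variant described above (illustrative only, not any particular library's implementation). It counts adjacent symbol pairs over the corpus and repeatedly merges the most frequent one; a WordPiece trainer would differ only in how the best pair is scored (likelihood increase instead of raw frequency, as in Step 3 above).

    from collections import Counter

    def learn_bpe(text, num_merges):
        """Learn BPE merges: repeatedly merge the most frequent adjacent symbol pair."""
        # Each word starts out as a sequence of single characters.
        words = Counter(tuple(word) for word in text.split())
        merges = []
        for _ in range(num_merges):
            # Count adjacent symbol pairs across the whole corpus.
            pairs = Counter()
            for word, freq in words.items():
                for a, b in zip(word, word[1:]):
                    pairs[(a, b)] += freq
            if not pairs:
                break
            # BPE: pick the most frequent pair (ties broken by first occurrence here).
            # WordPiece would instead pick the pair whose merge most increases the
            # likelihood of a language model built on the data (Step 3 above).
            best = max(pairs, key=pairs.get)
            merges.append(best)
            # Replace every occurrence of the chosen pair with the merged symbol.
            new_words = Counter()
            for word, freq in words.items():
                merged, i = [], 0
                while i < len(word):
                    if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                        merged.append(word[i] + word[i + 1])
                        i += 2
                    else:
                        merged.append(word[i])
                        i += 1
                new_words[tuple(merged)] += freq
            words = new_words
        return merges

    print(learn_bpe("she walked . he is a dog walker . i walk", 3))
    # -> [('w', 'a'), ('wa', 'l'), ('wal', 'k')] with this tie-breaking rule.

Because the top three pairs in this corpus are all tied at a count of 3, the exact merge order depends on tie-breaking; the example above broke ties differently, but either way walk ends up as a single unit after three merges.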

How does it handle rare/OOV words?

Quite simply, OOV words are impossible if you use such a segmentation method. Any word which does not occur in the vocabulary will be broken down into subword units. Similarly, for rare words, given that the number of subword merges we used is limited, the word will not occur in the vocabulary, so it will be split into more frequent subwords.
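
At tokenization time, an unseen word is matched against the learned vocabulary. The sketch below uses the greedy longest-match-first strategy that BERT's WordPiece tokenizer applies, with '##' marking continuation pieces; the vocabulary here is a toy one made up purely for illustration.

    def wordpiece_tokenize(word, vocab, unk="[UNK]"):
        """Greedy longest-match-first segmentation over a fixed subword vocabulary.
        Continuation pieces are written with a leading '##', as in BERT."""
        pieces, start = [], 0
        while start < len(word):
            end, match = len(word), None
            while start < end:
                piece = word[start:end]
                if start > 0:
                    piece = "##" + piece
                if piece in vocab:
                    match = piece
                    break
                end -= 1
            if match is None:
                return [unk]  # no known piece covers this position
            pieces.append(match)
            start = end
        return pieces

    # Toy vocabulary, made up for illustration: a few frequent stems/suffixes
    # plus every single character (so no word is ever truly out-of-vocabulary).
    letters = "abcdefghijklmnopqrstuvwxyz"
    vocab = {"walk", "play", "##ing", "##ed", "##er", "##s"}
    vocab |= set(letters) | {"##" + c for c in letters}

    print(wordpiece_tokenize("walking", vocab))    # ['walk', '##ing']
    print(wordpiece_tokenize("walkathon", vocab))  # ['walk', '##a', '##t', '##h', '##o', '##n']

Even a word never seen in training, like walkathon here, falls back to known pieces; in the worst case it decomposes all the way down to single characters.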

How does this help?

Imagine that the model sees the word walking. Unless this word occurs at least a few times in the training corpus, the model can't learn to handle it very well. However, the corpus may contain the words walked, walker, and walks, each occurring only a few times. Without subword segmentation, all of these are treated as completely different words by the model.

However, if these get segmented as walk@@ ing, walk@@ ed, etc., notice that all of them now have walk@@ in common, which will occur much more frequently during training, so the model may be able to learn more about it.
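
As a quick illustration, applying the three merges from the example above to these word forms makes the shared piece explicit (non-final pieces are marked with @@, matching the walk@@ ing notation used here):

    def apply_bpe(word, merges):
        """Apply learned BPE merges to one word, in the order they were learned."""
        symbols = list(word)
        for a, b in merges:
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and symbols[i] == a and symbols[i + 1] == b:
                    out.append(a + b)
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            symbols = out
        # Mark every non-final piece with '@@' so the original word can be restored.
        return [s + "@@" if i < len(symbols) - 1 else s for i, s in enumerate(symbols)]

    merges = [("w", "a"), ("l", "k"), ("wa", "lk")]  # the three merges from the example
    for w in ["walking", "walked", "walker"]:
        print(w, "->", apply_bpe(w, merges))
    # walking -> ['walk@@', 'i@@', 'n@@', 'g']
    # walked  -> ['walk@@', 'e@@', 'd']
    # walker  -> ['walk@@', 'e@@', 'r']

All three forms share walk@@, so every occurrence of any of them contributes training signal for the same subword.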

answered Sep 19 '22 by Proyag