 

How to compare sentence similarities using embeddings from BERT

I am using the HuggingFace Transformers package to access pretrained models. As my use case needs functionality for both English and Arabic, I am using the bert-base-multilingual-cased pretrained model. I need to be able to compare the similarity of sentences using something such as cosine similarity. To use this, I first need to get an embedding vector for each sentence, and can then compute the cosine similarity.

Firstly, what is the best way to extract a semantic embedding from the BERT model? Would taking the last hidden state of the model after being fed the sentence suffice?

import torch
from transformers import BertModel, BertTokenizer

model_class = BertModel
tokenizer_class = BertTokenizer
pretrained_weights = 'bert-base-multilingual-cased'

tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
model = model_class.from_pretrained(pretrained_weights)

sentence = 'this is a test sentence'

input_ids = torch.tensor([tokenizer.encode(sentence, add_special_tokens=True)])
with torch.no_grad():
    output_tuple = model(input_ids)
    last_hidden_states = output_tuple[0]  # shape [1, n_tokens, hidden_size]

print(last_hidden_states.size(), last_hidden_states)

Secondly, if this is a sufficient way to get embeddings from my sentence, I now have another problem: the embedding vectors have different lengths depending on the length of the original sentence. The output shape is [1, n, hidden_size], where n can have any value.

In order to compute two vectors' cosine similarity, they need to be the same length. How can I do this here? Could something as naive as first summing across axis=1 still work? What other options do I have?
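
For concreteness, here is a rough sketch of what I have in mind (mean pooling over axis 1, which is just the sum rescaled, followed by cosine similarity); I am not sure whether this is a sound approach:

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
model = BertModel.from_pretrained('bert-base-multilingual-cased')

def embed(sentence):
    # Tokenize, run the model and average the token embeddings over axis 1
    input_ids = torch.tensor([tokenizer.encode(sentence, add_special_tokens=True)])
    with torch.no_grad():
        last_hidden_states = model(input_ids)[0]      # [1, n, hidden_size]
    return last_hidden_states.mean(dim=1).squeeze(0)  # [hidden_size]

emb_a = embed('this is a test sentence')
emb_b = embed('this is another test sentence')
print(torch.nn.functional.cosine_similarity(emb_a, emb_b, dim=0).item())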

KOB asked Mar 02 '20



3 Answers

In addition to the already great accepted answer, I want to point you to Sentence-BERT, which discusses the similarity aspect and the implications of specific metrics (like cosine similarity) in greater detail. They also have a very convenient implementation online. The main advantage here is that they seemingly gain a lot of processing speed compared to a "naive" sentence embedding comparison, but I am not familiar enough with the implementation itself to comment on the details.

Importantly, there is generally also a more fine-grained distinction to make in what kind of similarity you want to look at. Specifically for that, there is also a great discussion in one of the task papers from SemEval 2014 (the SICK dataset), which goes into more detail about this. From your task description, I am assuming that you are already using data from one of the later SemEval tasks, which also extended this to multilingual similarity.

dennlinger answered Oct 20 '22


You can use the [CLS] token as a representation for the entire sequence. This token is typically prepended to your sentence during the preprocessing step and is typically used for classification tasks (see figure 2 and paragraph 3.2 in the BERT paper).

It is the very first token of the sequence, so its embedding is the first vector of the model's output.

Alternatively, you can take the average vector of the sequence (as you suggest, over axis 1, the token axis), which can yield better results according to the HuggingFace documentation (3rd tip).

Note that BERT was not designed for sentence similarity using the cosine distance, though in my experience it does yield decent results.
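
As a minimal sketch, continuing from the variables in the question's snippet (last_hidden_states with shape [1, n, hidden_size]), the two options look like this:

# [CLS] embedding: the vector at position 0 of the last hidden state
cls_embedding = last_hidden_states[:, 0, :]      # shape [1, hidden_size]

# Mean pooling over the token axis (axis 1)
mean_embedding = last_hidden_states.mean(dim=1)  # shape [1, hidden_size]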

Swier answered Oct 20 '22


You should NOT use BERT's output as sentence embeddings for semantic similarity. BERT is not pretrained for semantic similarity, so using it this way leads to poor results, even worse than simple GloVe embeddings. See below a comment from Jacob Devlin (first author of the BERT paper) and an excerpt from the Sentence-BERT paper, which discusses sentence embeddings in detail.

Jacob Devlin's comment: I'm not sure what these vectors are, since BERT does not generate meaningful sentence vectors. It seems that this is doing average pooling over the word tokens to get a sentence vector, but we never suggested that this will generate meaningful sentence representations. And even if they are decent representations when fed into a DNN trained for a downstream task, it doesn't mean that they will be meaningful in terms of cosine distance. (Since cosine distance is a linear space where all dimensions are weighted equally). (https://github.com/google-research/bert/issues/164#issuecomment-441324222)

From Sentence-BERT paper: The results show that directly using the output of BERT leads to rather poor performances. Averaging the BERT embeddings achieves an average correlation of only 54.81, and using the CLS token output only achieves an average correlation of 29.19. Both are worse than computing average GloVe embeddings. (https://arxiv.org/pdf/1908.10084.pdf)

You should instead use a model pretrained specifically for sentence similarity, such as Sentence-BERT. Sentence-BERT and several other pretrained models for sentence similarity are available in the sentence-transformers library (https://www.sbert.net/docs/pretrained_models.html), which is fully compatible with the amazing HuggingFace transformers library. With these libraries, you can obtain sentence embeddings in just a few lines of code.
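
For instance, a minimal sketch with sentence-transformers (the model name 'paraphrase-multilingual-MiniLM-L12-v2' is just one of the multilingual models listed on the page above; pick whichever best fits your English/Arabic use case):

from sentence_transformers import SentenceTransformer, util

# A multilingual pretrained model; see the pretrained-models page linked
# above for the full, up-to-date list of model names.
model = SentenceTransformer('paraphrase-multilingual-MiniLM-L12-v2')

sentences = ['this is a test sentence', 'this is another test sentence']
embeddings = model.encode(sentences)  # one fixed-size vector per sentence

# Cosine similarity between the two sentence embeddings
print(util.cos_sim(embeddings[0], embeddings[1]).item())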

Cristian Arteaga answered Oct 20 '22