Measuring semantic similarity between two phrases [closed]


I want to measure semantic similarity between two phrases/sentences. Is there any framework that I can use directly and reliably?

I have already checked out this question, but it's pretty old and I couldn't find a really helpful answer there. There was one link, but I found it unreliable.

e.g.:
I have a phrase: felt crushed
I have several choices: force inwards, pulverized, destroyed emotionally, reshaping, etc.
I want to find the term/phrase with highest similarity to the first one.
The answer here is: destroyed emotionally.

The bigger picture is: I want to identify which frame from FrameNet matches the given verb as per its usage in a sentence.

Update: I found this library very useful for measuring similarity between two words. The ConceptNet similarity mechanism is also very good.

I also found this library useful for measuring semantic similarity between sentences.

If anyone has any insights please share.

asked Apr 25 '13 by voidMainReturn

People also ask

How do you find the semantic similarity between two sentences?

The easiest way of estimating the semantic similarity between a pair of sentences is by taking the average of the word embeddings of all words in the two sentences, and calculating the cosine between the resulting embeddings.
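A minimal sketch of that averaging approach, using made-up toy vectors purely for illustration (a real system would load pre-trained embeddings such as word2vec or GloVe instead):

    import numpy as np

    # Toy word vectors for illustration only; a real system would load
    # pre-trained embeddings (word2vec, GloVe, etc.).
    EMBEDDINGS = {
        "felt":        np.array([0.2, 0.8, 0.1]),
        "crushed":     np.array([0.1, 0.9, 0.3]),
        "destroyed":   np.array([0.2, 0.7, 0.4]),
        "emotionally": np.array([0.3, 0.9, 0.2]),
        "pulverized":  np.array([0.9, 0.1, 0.5]),
    }

    def sentence_vector(sentence):
        """Average the embeddings of the known words in the sentence."""
        vectors = [EMBEDDINGS[w] for w in sentence.lower().split() if w in EMBEDDINGS]
        return np.mean(vectors, axis=0)

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # With these toy vectors the first pair scores higher than the second.
    print(cosine(sentence_vector("felt crushed"), sentence_vector("destroyed emotionally")))
    print(cosine(sentence_vector("felt crushed"), sentence_vector("pulverized")))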

How do you calculate semantic similarity?

One proposed method calculates the semantic similarity between words and sentences using an edge-based approach over a lexical database. The methodology can be applied in a variety of domains and has been tested on both benchmark standards and a mean human similarity dataset.
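A rough sketch of an edge-based measure, here using WordNet through NLTK as the lexical database (one possible instantiation rather than the exact method quoted above; requires the 'wordnet' corpus, e.g. nltk.download('wordnet')):

    from nltk.corpus import wordnet as wn

    def word_similarity(w1, w2):
        """Best path-based (edge-counting) similarity over all synset pairs."""
        scores = [
            s1.path_similarity(s2) or 0.0   # path_similarity may return None
            for s1 in wn.synsets(w1)
            for s2 in wn.synsets(w2)
        ]
        return max(scores, default=0.0)

    print(word_similarity("crushed", "destroyed"))
    print(word_similarity("crushed", "reshaping"))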

What is semantic similarity search?

Humans determine the similarity between texts based on the similarity of the composing words and their abstract meaning. Documents containing similar words are semantically related, and words that frequently co-occur are also considered close. The plugin supports document and term searches.


1 Answer

This is a very complicated problem.

The main technique I can think of (before going into more complicated NLP processing) would be to apply cosine similarity (or any other vector-space metric) to each pair of phrases, as in the rough sketch below. Obviously this naive solution performs poorly on its own because of the vocabulary-mismatch problem: the sentences might refer to the same concept with different words.
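For concreteness, a bare-bones bag-of-words cosine might look like this sketch; it scores the asker's example at zero precisely because the two phrases share no surface terms:

    import math
    from collections import Counter

    def bow_cosine(phrase_a, phrase_b):
        """Cosine similarity between simple bag-of-words term counts."""
        a = Counter(phrase_a.lower().split())
        b = Counter(phrase_b.lower().split())
        dot = sum(a[t] * b[t] for t in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * \
               math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    # No shared words, so the score is 0.0 even though the phrases are
    # semantically close -- this is the vocabulary-mismatch problem.
    print(bow_cosine("felt crushed", "destroyed emotionally"))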

To solve this issue, you should transform the initial representation of each phrase into a more "conceptual" one. One option would be to extend each word with its synonyms (e.g. using WordNet); another option is to apply distributional semantics (DS, http://liawww.epfl.ch/Publications/Archive/Besanconetal2001.pdf), which extends the representation of each term with the words most likely to appear alongside it.

Example: the representation of a document {"car", "race"} would be transformed into {"car", "automobile", "race"} with synonyms, while with DS it would be something like {"car", "wheel", "road", "pilot", ...}.

Obviously this transformation won't be binary; each term will have an associated weight, as in the sketch below.
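A minimal sketch of the synonym-expansion step, assuming WordNet via NLTK and an arbitrary weight of 0.5 for added synonyms (the weighting scheme is a placeholder, not part of the answer above):

    from nltk.corpus import wordnet as wn   # requires nltk.download('wordnet')

    def expand_with_synonyms(terms, synonym_weight=0.5):
        """Extend a bag of terms with WordNet synonyms, keeping per-term weights."""
        weighted = {t: 1.0 for t in terms}          # original terms keep full weight
        for t in terms:
            for synset in wn.synsets(t):
                for lemma in synset.lemma_names():
                    word = lemma.lower().replace("_", " ")
                    weighted.setdefault(word, synonym_weight)
        return weighted

    print(expand_with_synonyms(["car", "race"]))
    # e.g. {'car': 1.0, 'race': 1.0, 'automobile': 0.5, 'auto': 0.5, ...}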

I hope this helps.

answered Oct 19 '22 by miguelmalvarez