 

Dialogflow matches irrelevant phrases to existing intents

I created a chatbot which tells the user the names of the members of my (extended) family and where they live. I store these data in a small MySQL database and fetch them with a PHP script whenever appropriate, depending on the user's interaction with the chatbot.
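To give an idea of the setup, here is a minimal sketch of such a lookup script. The family_members table, its columns, and the response shape are illustrative assumptions, not the actual schema:

<?php
// Hypothetical lookup: a `family_members` table with `relation` and `city`
// columns is an assumed schema for illustration only.
$pdo = new PDO('mysql:host=localhost;dbname=family', 'user', 'password');

$stmt = $pdo->prepare('SELECT city FROM family_members WHERE relation = :relation');
$stmt->execute([':relation' => 'uncle']);
$city = $stmt->fetchColumn();

// Respond in the Dialogflow v1 webhook format (speech/displayText).
header('Content-Type: application/json');
echo json_encode([
    'speech'      => "Your uncle is living in $city",
    'displayText' => "Your uncle is living in $city",
]);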

For this reason, I have created two intents in addition to the Default Fallback Intent and the Default Welcome Intent:

  • Names
  • Location_context

The first intent ('Names') is trained with phrases such as 'What is the name of your uncle?' and has an output context. The second intent ('Location_context') is trained with phrases such as 'Where is he living?', 'Where is he based?', 'Where is he located?', 'Which city does he live in?', etc., and has an input context (from 'Names').

In general, this basic chatbot works well for what it is made for. My problem, however, is that (after the 'Names' intent is triggered) if you ask something nonsensical such as 'Where is he snowing?', the chatbot will trigger the 'Location_context' intent and respond (as defined) that 'Your uncle is living in New York'. Let me also mention that, as I have structured the chatbot so far, these responses get a confidence score higher than 0.75, which is pretty high.

How can I make my chatbot trigger the Default Fallback Intent on these nonsensical questions (or even on more reasonable questions such as 'Where is he eating?', which are still not exactly related to the 'Location_context' intent), instead of triggering intents such as 'Location_context' that merely share some keywords with the query, such as the word 'Where'?

asked Jan 28 '23 by Outcast

1 Answer

Try playing around with the ML CLASSIFICATION THRESHOLD in your agent settings (Settings > ML Settings). By default it comes with a very low threshold (0.2), which is a little aggressive: intents can be matched even when the confidence is low.

Define the threshold value for the confidence score. If the returned value is less than the threshold value, then a fallback intent will be triggered or, if no fallback intent is defined, no intent will be triggered.

You can see the score for your query in the JSON response:

{
    "source": "agent",
    "resolvedQuery": "Which city does he live at?",
    "metadata": {
        "intentId": "...",
        "intentName": "Location_context"
    },
    "fulfillment": {
        "speech": "Your uncle is living in New York",
        "messages": [{
            "type": 0,
            "speech": "Your uncle is living in New York"
        }]
    },
    "score": 0.9
}

Compare the scores between the right and wrong matches and you will have a good idea of which confidence threshold is the right one for your agent.
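If you query the agent through the API, you can also enforce a threshold of your own on top of the agent's setting; this is a supplementary technique, not the console setting itself. A sketch in PHP, assuming the response shape shown above ($apiResponseBody, standing for the raw JSON returned by the API, is a hypothetical placeholder):

<?php
// Enforce a custom confidence threshold on top of the agent's result.
// $apiResponseBody is a placeholder for the raw JSON returned by the API,
// assumed to have the shape shown above.
$result = json_decode($apiResponseBody, true);
$threshold = 0.75; // tune by comparing scores of right and wrong matches

if (($result['score'] ?? 0) < $threshold) {
    // Treat low-confidence matches as a fallback.
    echo "Sorry, I didn't get that. Could you rephrase?";
} else {
    echo $result['fulfillment']['speech'];
}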

After changing this setting, let the agent train, try again, and adjust until it meets your needs.

Update

For queries that still get a high score, like 'Where is he cooking?', you could add another intent, a custom fallback, to handle those false positives, perhaps with a custom entity, NonLocationActions, and use template mode (@) in the user expressions:

  • where is he @NonLocationActions:NonLocationActions
  • which city does he @NonLocationActions:NonLocationActions


This way, these queries will get a score of 1 in the new custom fallback, instead of around 0.7 in the location intent.

answered Jan 30 '23 by Marcos Casagrande