For example, if the user writes in the Watson Conversation Service:
"I wouldn't want to have a pool in my new house, but I would love to live in a Condo"
How can you know that the user doesn't want to have a pool, but would love to live in a Condo?
This is a good question, and yeah, this is a bit tricky...
Currently your best bet is to provide as many examples as possible of the utterances that should be classified as a particular intent as training examples for that intent - the more examples you provide, the more robust the NLU (natural language understanding) will be.
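As a rough sketch of what adding training examples programmatically might look like with the ibm-watson Python SDK (the API key, service URL, and workspace ID below are placeholders, and the exact parameters are worth double-checking against the current SDK docs):

```python
# Sketch: adding training examples to an intent via the Watson Assistant V1 API.
# Assumes the ibm-watson Python SDK; the API key, service URL, and
# workspace ID are placeholders, not real values.
from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

assistant = AssistantV1(
    version='2021-11-27',
    authenticator=IAMAuthenticator('YOUR_API_KEY')
)
assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

# The more varied utterances each intent has, the more robust the classifier.
assistant.create_intent(
    workspace_id='YOUR_WORKSPACE_ID',
    intent='intent-pool',
    examples=[
        {'text': 'I want a pool in my new house'},
        {'text': 'A swimming pool is a must for me'},
        {'text': 'Please only show houses that have a pool'},
    ]
)
```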
Having said that, note that using examples such as:
"I would want to have a pool in my new house, but I wouldn't love to live in a Condo"
for intent-pool
and
"I wouldn't want to have a pool in my new house, but I would love to live in a Condo"
for intent-condo
will make the system classify these sentences correctly, but the confidence difference between them might be quite small (because they are quite similar when you look just at the text).
So the question here is whether it is worth making the system classify such intents out of the box, or instead training it on simpler examples and using some form of disambiguation when you see that the top N intents have small confidence differences.
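A minimal sketch of that disambiguation step in plain Python (the confidence threshold is an illustrative assumption you would tune on your own data, and the intent list mirrors the shape of the intents array returned by a message call):

```python
# Sketch: ask the user to clarify when the top intents are too close to call.
# The 0.15 gap is an assumed threshold, not a value prescribed by Watson.
def needs_disambiguation(intents, gap=0.15):
    """Return True when the top two intents are too close to trust.
    intents: list of {'intent': ..., 'confidence': ...}, sorted by
    confidence descending, as in a Watson message() response."""
    if len(intents) < 2:
        return False
    return intents[0]['confidence'] - intents[1]['confidence'] < gap

# Example: the pool/condo sentences may come back nearly tied.
intents = [
    {'intent': 'intent-condo', 'confidence': 0.62},
    {'intent': 'intent-pool', 'confidence': 0.55},
]

if needs_disambiguation(intents):
    # Too close to call: ask instead of guessing between near-ties.
    choices = ' or '.join(i['intent'] for i in intents[:2])
    print(f"Just to confirm - did you mean {choices}?")
else:
    print(f"Proceeding with {intents[0]['intent']}")
```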