Creating Intent Recommendations For Your Chatbot


And How To Identify Gaps in Training Data & Fix Them

Introduction

…and how to use AI to enhance Conversational AI

Most often, the first step in creating a chatbot is listing the different intents. Intents are the different intentions a user might want to express when using your chatbot.

Listing Of Intents Within IBM Watson Assistant

In this example Customer Care Sample Skill, the different intents are clearly all related to customer care.

The first intent addressed is usually the greeting, then the goodbye, followed by small talk.

The key is to segment the intents accurately and avoid conflicts, and not to have too many or too few intents.

If there are too many intents, conflicts are inevitable, with duplication of data. If there are too few, the chatbot will invariably fail to address the query and need of the user, and you will start seeing scenarios like fallback proliferation.

This is the challenge: keeping the intents granular, but not too granular; broad, but not too broad.
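One practical symptom of intents that are too granular is the same user example appearing, near-verbatim, under more than one intent. A minimal sketch of such a conflict check is below; the intent names and phrases are made up for illustration, not taken from a real skill.

```python
# Sketch: flag potential intent conflicts by looking for example
# utterances that appear (near-)verbatim under more than one intent.
# Intent names and phrases are illustrative, not from a real skill.

def normalize(utterance: str) -> str:
    """Lowercase and strip punctuation so trivial variants match."""
    return "".join(
        ch for ch in utterance.lower() if ch.isalnum() or ch.isspace()
    ).strip()

def find_conflicts(intents: dict) -> dict:
    """Return {normalized example: [intents that share it]}."""
    seen = {}
    for intent, examples in intents.items():
        for example in examples:
            seen.setdefault(normalize(example), []).append(intent)
    return {ex: owners for ex, owners in seen.items() if len(owners) > 1}

intents = {
    "#Ask_Flight_Times": ["What time does my flight leave?", "When is my flight?"],
    "#Flight_Status":    ["Is my flight on time?", "when is my flight"],
}

print(find_conflicts(intents))
# One example sits under both intents: a sign the split is too granular.
```

A shared example like this usually means two intents should be merged, or their training data needs to be disentangled.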

Intent Recommendation

The real-life scenario is too often a group of people sitting together and deciding on what the user might say and how they might say it.

The Guessing Game We Often Play With User Intents

Too often this is really an approximation, which demands painful iterations to enlarge the overlap between what users actually say and what the chatbot anticipates. The ideal avenue is having existing customer conversations which can be analyzed to discover and group user intents.

An example of this can be a list of phrases where users ask about flight times. Indeed, this can be one intent, #Ask_Flight_Times. But what if we could take the list of example requests and have a machine group them into intents automatically?
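To make the idea concrete, here is a deliberately naive sketch of what automated grouping does: bucket phrases by a telling keyword they contain. Real recommenders (Watson's included) use far richer clustering; the keyword table and phrases below are invented purely for illustration.

```python
# Naive sketch of automated intent grouping: bucket phrases by the
# first telling keyword they contain. Real systems use statistical
# clustering; this only illustrates the idea. Phrases are made up.

KEYWORDS = {"schedule": "schedule", "list": "list", "time": "time"}

def group_phrases(phrases):
    """Assign each phrase to the group of its first known keyword."""
    groups = {}
    for phrase in phrases:
        tokens = phrase.lower().split()
        label = next((KEYWORDS[t] for t in tokens if t in KEYWORDS), "other")
        groups.setdefault(label, []).append(phrase)
    return groups

phrases = [
    "show me the flight schedule",
    "list all flights to Denver",
    "what time does flight 201 depart",
]
print(group_phrases(phrases))
```

Each resulting bucket is a candidate intent; the groups a real recommender produces are simply a much smarter version of this.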

Below is a spreadsheet extract of travelers asking about flight times.

Flight Time Request Phrases

This list of phrases was added as a data source to an IBM Watson Assistant skill. In addition to this, recommended intents and intent examples can be sourced from connected live assistants and CSV files.

Generated Intent Recommendations From Example Data

Watson identified nine different intents from the source data, centered around the various ways users can phrase the question. From the auto-generated recommendations it is evident how the results need to be presented to the user.

For instance:

  • a schedule,
  • or a list,
  • or a time for a specific flight,
  • as well as airline- and day-related variations.

Data Source

For this example our data source is the ATIS (Airline Travel Information System) dataset. There are various groupings of data, but we are only going to look at the atis_flight_time grouping, which constitutes fifty-five recordings.
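Isolating one grouping from a labelled ATIS-style file is a one-liner. The in-memory sample below stands in for the real dataset, which you would load from the Kaggle download; only the label `atis_flight_time` is taken from the source.

```python
# Sketch: pull only the atis_flight_time utterances out of a
# labelled ATIS-style dataset. The in-memory sample stands in for
# the real file from the Kaggle download.

samples = [
    ("atis_flight_time", "what time does the flight to denver leave"),
    ("atis_flight", "show me flights from boston to atlanta"),
    ("atis_flight_time", "when is the first flight from pittsburgh"),
]

flight_time_utterances = [
    text for label, text in samples if label == "atis_flight_time"
]
print(len(flight_time_utterances))  # 2 in this toy sample; 55 in the full grouping
```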

Test Environment

The test environment is IBM Watson Assistant Plus. Once we have uploaded the data, Watson evaluates the user utterances and identifies common problem areas that customers mention frequently.

Watson Assistant then displays a set of discrete groups of related candidate user examples that capture the trending customer needs. The groups are so granular in scope that you might choose to add separately-grouped candidate user examples to the same intent.

You can review each recommended intent and the corresponding user examples to choose the ones you want to add to your training data.
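The review step boils down to merging the approved recommendations into an intent's existing examples without creating duplicates. A minimal sketch, with made-up names, looks like this:

```python
# Sketch: merge reviewer-approved recommendations into an intent's
# existing examples, skipping duplicates. Names are illustrative.

def add_recommendations(existing, approved):
    """Append approved examples not already present (case-insensitive)."""
    seen = {e.lower() for e in existing}
    merged = list(existing)
    for rec in approved:
        if rec.lower() not in seen:
            merged.append(rec)
            seen.add(rec.lower())
    return merged

existing = ["what time is my flight"]
approved = ["When does my flight leave", "What time is my flight"]
print(add_recommendations(existing, approved))
# ['what time is my flight', 'When does my flight leave']
```

Deduplicating at this point keeps the training data clean, so conflicts do not creep back in through the recommendation workflow.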

Loading Your Source Data

Within the main Watson Assistant console select the tab Recommendation sources. This will take you to the area where you manage your source data.

Managing Recommendation Sources Within Watson

Again, the source data can come from current live conversations; that is, the log of an assistant that is deployed and actively interacting with customers.

The alternative is uploading a CSV file, or multiple files. The upload time obviously depends on the size of the data set.
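Preparing such a CSV is straightforward: one user utterance per row. The snippet below is a sketch of that preparation step; the exact format Watson expects may differ by plan and version, so check the IBM Cloud docs before uploading, and the utterances and filename here are invented.

```python
# Sketch: write a CSV recommendation source with one user utterance
# per row, no header. Verify the expected format against the IBM
# Cloud docs for your plan/version; utterances here are made up.

import csv

utterances = [
    "what time are the flights from boston to san francisco",
    "when does the first flight leave for denver",
    "show me departure times for united flights on monday",
]

with open("flight_time_requests.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for utterance in utterances:
        writer.writerow([utterance])
```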

Defining Recommendation Sources

The list of utterances can be extracted from support center chat transcripts or other related, recorded customer conversations within the organization.

You will have to decide on one recommendation source type, live logs or CSV files; you cannot have both.

The recommendation source data that you add is used to derive both intent and intent example recommendations.

This is an important feature as the data is grouped into intents. And within each of these defined intents, a list is made by Watson Assistant which constitutes the user examples.

The names of the intents created can be cryptic. It’s best to rename them to something more intelligible for future reference. Obviously, you are at liberty to use these generated values as a guide and edit and update them as you see fit.

Merely looking at the way Watson Assistant organizes the data will already spark many insights and ideas on organizing the data.

Intent Examples

The examples listed under each intent expose that particular intent and increase the accuracy of your model in assigning a user utterance. The number to aim for is between 10 and 20 example utterances per intent.
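That 10-to-20 guideline is easy to audit programmatically. Here is a small sketch; the intent names and counts are made up.

```python
# Sketch: audit a skill's intents against the 10-20 examples-per-
# intent guideline. Intent names and counts are illustrative.

def audit_examples(intent_counts, low=10, high=20):
    """Return intents whose example count falls outside [low, high]."""
    return {
        intent: count
        for intent, count in intent_counts.items()
        if not low <= count <= high
    }

counts = {"#Ask_Flight_Times": 14, "#Cancel": 3, "#Greeting": 25}
print(audit_examples(counts))  # {'#Cancel': 3, '#Greeting': 25}
```

Intents flagged as short are exactly the ones worth running example recommendations against.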

Searching for Recommendations on an Intent with no Relative Information

When examining an intent, even a simple one like #Cancel, where the user can cancel any current activity, and searching for user example recommendations, no results could be generated.

This is an accurate result, as none of the uploaded reference data related to the #Cancel intent.

Again, the aim is to accurately increase the coverage of the intent, and hence enrich the model.

When we select an intent which is related to the training data available, results are yielded.

In the image below you can see the recommended phrases and these can be selected and added to the intent.

What I like about this approach is that it is measured and happens under full supervision of the user. Data can be uploaded in bulk, but the inspecting and adding of recommendations is manual, allowing for a consistent and controlled augmentation of the skill.

Recommended User Example List & Adding Items From the List

From the examples, the positions and possible values of entities are also visible. Entities can be selected within the intent example and defined.

In the example here, the two words “san” and “francisco” can be defined as one entity and assigned to the entity type city. Perhaps even a specific entity related to the context of city of departure or city of arrival can be assigned.

Defining Entities via Contextual Annotation
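One way to picture such an annotation is as the mention's character span plus the entity it maps to. The structure below is illustrative, not Watson's internal format, and the departure/arrival entity names are assumptions for the sake of the example.

```python
# Sketch: representing a contextual entity annotation on an intent
# example - the mention's character span plus the entity it maps
# to. Illustrative structure, not Watson's internal format.

example = "what time are the flights from san francisco to denver"

annotations = [
    {"entity": "city_departure", "value": "san francisco",
     "location": [31, 44]},
    {"entity": "city_arrival", "value": "denver",
     "location": [48, 54]},
]

for ann in annotations:
    start, end = ann["location"]
    assert example[start:end] == ann["value"]  # spans line up with the text
print("annotations verified")
```

Keeping spans anchored to the raw text like this is what lets a model learn the entity from its context ("from …" versus "to …") rather than from a fixed value list.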

Ideas for what might constitute an entity can be gleaned from the generated data. And as contextual entities grow in popularity and use, the context can also be understood and implemented.

Conclusion

The process of establishing a conversational User Interface needs to be planned and executed very deliberately. Starting wrong will frustrate users and result in a prolonged process of improving the chatbot.

Data is important, the framework used is also important, and being able to scale on demand is crucial.



Read More Here…

  • Cobus Greyling (cobusgreyling.me)
  • Cobus Greyling on Medium (medium.com)
  • IBM Cloud Docs (cloud.ibm.com)
  • ATIS Intent Classification Dataset (www.kaggle.com)
