Managing Users’ Initial Screening Of Your Chatbot
New Users Will Invariably Perform A Capability Check On A Chatbot
Most chatbot conversations follow a very similar pattern…
Recognizing this pattern gives you, as a designer and developer, the opportunity not only to make provision for possible permutations, but to anticipate them.
Regardless of the user interface, and whether input is structured or unstructured, the user needs to explore and grasp the interface. With command-line interfaces, users learn to navigate by memorizing commands and through a process of trial and error.
Graphic interfaces can be explored by virtue of their graphic affordances.
When a new interface is introduced, loose patterns of behavior quickly solidify into models of convention, models which designers need to adhere to.
With mobile app interfaces, models of convention were quickly established. Users started to touch the screen, rotate it, expand, pinch, swipe up, swipe down, tap, and so on.
Natural human conversation contains devices for its own management.
Robert J. Moore
During the screening phase of the conversation, the user is checking the capability of the chatbot, understanding the unseen conversational landscape of the conversational interface.
Irrelevance detection is important: the chatbot must be able to detect when a user request falls outside the designed domain and advise the user accordingly.
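A common way to implement this is an intent-confidence threshold with a fallback response. Here is a minimal sketch; the intent names, scores, and the `classify` stub are hypothetical stand-ins for a real NLU call, not any specific product's API:

```python
# Minimal irrelevance-detection sketch. The intents, scores and the
# classify() stub are hypothetical, not a specific NLU product's API.
CONFIDENCE_THRESHOLD = 0.6

def classify(utterance):
    # Stand-in for a real NLU call; returns (intent, confidence).
    scores = {"book_flight": 0.2, "check_balance": 0.1}
    if "flight" in utterance.lower():
        scores["book_flight"] = 0.9
    best = max(scores, key=scores.get)
    return best, scores[best]

def respond(utterance):
    intent, confidence = classify(utterance)
    if confidence < CONFIDENCE_THRESHOLD:
        # Out-of-domain request: advise the user instead of guessing.
        return "I can help with flights and account balances. Could you rephrase?"
    return f"Handling intent: {intent}"
```

The key design point is the fallback branch: when no intent scores above the threshold, the bot tells the user what it *can* do rather than committing to a wrong journey.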
With invisible affordances, user discovery is much harder than in the case of a graphic interface.
Within the Capability Check phase of the conversation, the user is exploring the capability; the capacity…and the affordances available.
Structured Versus Unstructured
During this initial discovery process, where the user screens the chatbot and checks out the capabilities of the conversational interface, the most appropriate approach is to start the conversation in an unstructured fashion.
It’s best not to start the conversation with a highly structured interface. Keep the conversation unstructured and conversation-like. Each chatbot covers a specific domain and is finite in its capabilities. Don’t funnel the conversation too soon into a Storytelling Sequence. Do not commit the user too soon to a specific journey.
The worst experience is when a user is prematurely committed to a specific customer journey in error. Worse still if there is no digression mechanism in place.
Only introduce structure if the Capability Check phase becomes too drawn out; do so to create some traction and move the dialog forward. Once the dialog moves along, structure is much less needed.
A big advantage of delaying structure is the possibility of deducing bona fide user intents and entities from the user's input, allowing for a much richer and more accurate response. Most NLU environments can detect entities contextually, and can also detect composite entities or entity roles.
What is an Entity?
The three words used most often in relation to chatbots are utterances, intents, and entities.
An utterance is really anything the user will say. The utterance can be a sentence, in some instances a few sentences, or just a word or a phrase. During the design phase you anticipate what your users might say to your bot.
An intent is the user's intention with their utterance, or their engagement with your bot. Think of intents as verbs, or working words. An utterance or single dialog turn from a user needs to be distilled into an intent.
Entities can be seen as nouns; often they are referred to as slots. These are usually things like dates, times, cities, names, brands, and so on. Capturing these entities is crucial for taking action based on the user's intent.
Think of a travel bot: capturing the cities of departure and destination, travel mode, price, dates, and times is the foundation of the interface. Yet this is the hardest part of the chatbot. Keep in mind the user enters data randomly, in no particular order.
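To make this concrete, here is a toy slot extractor for a travel utterance. The city list, patterns, and slot names are purely illustrative; a real NLU model detects entities contextually rather than with fixed regular expressions:

```python
import re

# Toy slot extractor for a travel bot. The city list and patterns are
# illustrative only; real NLU models detect entities contextually.
CITIES = ["london", "paris", "berlin"]

def extract_slots(utterance):
    """Pull departure, destination and date slots from free-form text,
    regardless of the order the user supplies them in."""
    text = utterance.lower()
    city_pattern = "|".join(CITIES)
    slots = {}
    m = re.search(rf"from ({city_pattern})", text)
    if m:
        slots["departure"] = m.group(1)
    m = re.search(rf"to ({city_pattern})", text)
    if m:
        slots["destination"] = m.group(1)
    m = re.search(r"\d{1,2} (?:jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec)\w*", text)
    if m:
        slots["date"] = m.group(0)
    return slots
```

Because each slot is matched independently, "to Paris from London on 12 March" and "from London on 12 March to Paris" yield the same slots, which is exactly the order-independence the paragraph above describes.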
Possible entity types:
· Simple Entities
· Composite Entities
· Entity Roles
· Entity Lists
· Regular Expressions
· Prebuilt Models
Composite Entities As An Example
Not many chatbot development environments allow for this…but what if you wanted to capture two entities which are related, referred to as a composite entity? Say, for instance, you have to manage the location of employees, and you have a list of offices and campuses.
Obviously the campus and the office are linked; perhaps there are a few offices on a campus. Here is a scenario of parent and child entities you need to capture together, with the added bonus of not having to ask the user two questions in a fixed-journey fashion.
So you would have a “Campus” entity and an “Office” entity. But you want them linked; hence composite entities. Your NLU returns an entities array, and each entity is given a type and a value.
To find more precision for each child entity, use the combination of type and value from the composite array item to find the corresponding item in the entities array.
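The lookup described above can be sketched as follows. The JSON shape here is illustrative; the exact structure of the entities array and its child references varies between NLU products:

```python
# Illustrative NLU response: a composite entity whose children reference
# items in the flat entities array by type and value. The field names
# and values are assumptions, not a specific NLU product's schema.
nlu_response = {
    "entities": [
        {"type": "Campus", "value": "north campus", "start": 10, "end": 21},
        {"type": "Office", "value": "office 3b", "start": 26, "end": 34},
        {"type": "EmployeeLocation", "value": "north campus and office 3b",
         "children": [
             {"type": "Campus", "value": "north campus"},
             {"type": "Office", "value": "office 3b"},
         ]},
    ]
}

def resolve_composite(response, composite_type):
    """Match each child of the composite back to the flat entities
    array by (type, value) to recover positional detail."""
    flat = response["entities"]
    composite = next(e for e in flat if e["type"] == composite_type)
    resolved = []
    for child in composite["children"]:
        match = next(e for e in flat
                     if e["type"] == child["type"]
                     and e["value"] == child["value"])
        resolved.append(match)
    return resolved
```

The result is the parent–child pair (campus plus office) resolved in one pass over a single utterance, without forcing the user through two separate questions.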
The user input is all you have to work with. Or, more precisely, the structured data derived from the unstructured input is all the data you have to work with. Don't decide too soon what the user wants; don't default too soon either.
If you have context from previous conversations, all the better.
However, draw the Capability Check (user screening) out as long as possible to derive intents, and possibly entities, and to show the user the chatbot is capable of having a conversation.
Read More Here…
Bot Framework Composer Documentation – Quickstarts, Tutorials, Samples: Bot Framework Composer is a visual designer that lets you quickly and easily build sophisticated conversational bots. (docs.microsoft.com)