Better Than the Best Customer Service

Jerry Gregoire of Dell once said, "The customer experience is the next competitive battleground." The bottom line: exceptional customer service is crucial for building a company's brand and reputation. With quality and pricing no longer a differentiator for most products and services, it's customer service that wins sales, good PR, and loyalty. That said, the ideal customer service scenario involves a knowledgeable customer working with an eager-to-help human. Guess how often that happens: not often.

Larger companies can invest the time and resources to train and motivate staff to keep customers happy. However, these same companies are offering increasingly complex products just to remain competitive, which results in a huge influx of confused customers calling in to get their questions answered (since, of course, they had no luck on the company's website or forums). As a result, organizations waste millions of dollars and thousands of hours answering the same simple questions over and over. That's why companies, large and small, have added automated customer service to their support tiers. Ever find yourself screaming and swearing at an automated virtual receptionist after it misunderstood what you were asking for?

Us neither… But it can be tempting.

The strategy and technology behind those virtual assistant services leave a lot to be desired right now, but that doesn't mean something better doesn't already exist.

By leveraging the api.ai platform, we can help companies automate their customer service with conversational virtual assistants that understand natural language and hold human-like conversations. This technology understands a customer's problem and can answer questions and resolve issues in real time, so customer service representatives can focus on more complex problems and other value-adding activities.

Of course, this technology can be deployed across all of your digital channels, including text, voice, web, mobile, and 3rd party API integrations. Multi-channel support lets customers receive seamless, consistent service regardless of the platform. But it doesn't stop there! Check out Forbes' article on how api.ai, integrated with Twilio, adds all-new insight to communication.





Interaction Scenario Design and Implementation

Many times it's easier to reach the "Oh!" moment by just watching somebody else do it. For the visual learners out there, this tutorial breaks down the process of creating and implementing a conversation-based interaction scenario for a pizza delivery app. Make your app stand out with a conversational component.

Here are the steps that we cover:

1) Outline an example dialog with a user

2) Populate relevant entities

3) Set up single intent recognition (Single query and response)

4) Use context to support conversation and remember parameters

Whether you're an iOS or Android developer, you can thrill your users by giving your app a voice interface.
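To make the end result concrete, here's an illustrative sketch of what the finished pizza agent could return for a single utterance (the action and parameter names are hypothetical, not something the tutorial prescribes):

	resolvedQuery: "I'd like a large pepperoni pizza delivered to my office"
	action: pizza.order
	parameters: { size: "large", topping: "pepperoni", deliveryLocation: "office" }

From there, context (step 4) is what lets the agent remember the size and topping when the user follows up with something like "actually, make that two."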





A Speech Interface in 3 Steps

Even a capable developer can be tripped up by the components of a full speech interface. Some might think, "Voice recognition sounds good enough," but that's just the first step. Voice recognition can fulfill your dictation needs, but when applied to use cases that call for a full voice interface, it's not only inefficient, it's the leading cause of users screaming and cursing at their technology. It's our privilege and sacred duty to set the record straight.

The overall processing flow of a comprehensive speech interface consists of 3 key components:

Speech recognition, speech to text, voice recognition, voice to text and automatic speech recognition (ASR) all refer to the same function: transcribing your spoken words. This first step converts your speech to text so that it can be processed (e.g. dictating an email or text to your mom). Devices need that component before anything else can happen.

(Side note: When working with api.ai you have a couple of different options…)

Option 1) You can use our ASR. This technology is based on statistical and acoustic data from over 2.5 billion processed requests to date. Once our clients have an implementation with api.ai, we can provide custom language models based on domain knowledge. Dynamic language models can also be generated or selected based on conversation and dialog focus.

Option 2) You can use any 3rd party ASR provider.

The second and most overlooked step is leveraging natural language understanding (NLU) technology. NLU cultivates natural interactions between you and a device, rather than limiting you to very basic voice commands or forcing you to know the magic words to make it work. Let's break it down:

	Intent Recognition: Needed to understand your user’s intent or meaning.

	Context Management: Needed for multi-query cases.

	Conversation Management: Needed to support back-and-forth dialogue.

The speech that has been transcribed into text is still foreign to your technology; it won't know what to do with it. Step 2 takes your text, your natural language input, and turns it into a structured object that your product can understand. In other words, it uses Intent Recognition to understand what the user is asking for.

(Side note: Through advanced techniques in Machine Learning, api.ai can predict variations of what the user might say and still translate the meaning into the appropriate structured object.)

For example, a user could say, "Set thermostat to 72 degrees," or "Thermostat to 72," and the words are transcribed and then translated into a structured object that is easy for the device to read.
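As a rough illustration (the action and parameter names here are hypothetical), that structured object might look something like:

	resolvedQuery: "Set thermostat to 72 degrees"
	action: thermostat.set
	parameters: { temperature: 72 }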

That's an example of a single-query interaction scenario. So what if you want to have some back-and-forth conversation with your device? You'd need Context Management. A user might say, "Set alarm for 7am," and their device would respond, "Alarm set for 7am." When they realize how unpleasant the 7am hour is, they could say, "That's too early. Set it for an hour later." And the device would know what its user was asking and respond, "Ok, alarm set for 8am."

(Side note: api.ai has a sophisticated context management toolset and engine implementation that supports both internal and external contextual data (e.g. GUI, gestures, diagnostics, etc.).)
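To sketch what that alarm exchange could look like under the hood (values and action names are illustrative), the follow-up request only makes sense because of the context carried over from the first one:

	First request:  resolvedQuery: "Set alarm for 7am", action: alarm.set, parameters: { time: "07:00" }
	Follow-up:  resolvedQuery: "Set it for an hour later", action: alarm.set, parameters: { time: "08:00" } (filled in from the alarm context)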

Conversation Management allows the engine to seamlessly switch between conversation topics while remembering what you were talking about (just like you wish your girl/boyfriend would). Users can also have clarifying conversations with your product in the same session or conversation, even across devices.
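As a (made-up) example of what that looks like in practice:

	User: Set alarm for 7am.
	Device: Alarm set for 7am.
	User: What's the weather like tomorrow?
	Device: Sunny and 75 degrees.
	User: Actually, make that alarm 8am.
	Device: Ok, alarm set for 8am.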

Now the device can transcribe your words, translate them into a readable structured object so that it can understand the user's intent and context, and support back-and-forth dialogue. What's missing? Fulfillment, implementation, follow-through. After your device has a clear understanding of what you're asking, it may need to take action to fulfill your request. A speech interface without fulfillment would look something like this:

	User: Turn on TV.
	*Nothing happens*
	Device: TV is on.
	User: …
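With fulfillment wired up, your code maps the returned action (say, a hypothetical tv.on) to the TV's actual power control, and the same exchange goes the way the user expects:

	User: Turn on TV.
	*TV turns on*
	Device: TV is on.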

So ask yourself, "Am I looking for transcription technology (speech to text)? Or for a back-and-forth dialogue interface that can deliver on my users' requests?" Because after all, who wants to live in a world without fulfillment?

Now you are equipped with an understanding of what a full speech interface requires and why it would crumble without each component. Our only hope is that you will take this knowledge and share it with those who think, “Voice recognition. That sounds good enough to me.”





Onboarding Users in the 21st Century

You spent months developing a great app for users to fall in love with. You went the extra mile and made a Getting Started tutorial to help onboard users, going beyond the typical UX design. But then those overly anxious users skip the helpful tips, get lost in the options, and complain that the app was too difficult to use or didn't work properly.

When it comes to new, exciting technology, who has the patience to read through a tutorial? Our instincts are to explore first and ask later. So why not onboard new users with a they-ask-a-question-they-get-an-answer interface? Here’s what onboarding new users with a full voice interface looks like:

1) Person asks app how to do X. We turn that into actionable data in the form of a JSON object, for example:

	resolvedQuery: “How do I create a new task”
	action: task.create.help
	parameters: { helpTopic: “new tasks” }

2) App tells person how to do X.

3) Person wants to know more, so they ask.
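Each follow-up question comes back as another structured object your app can answer. For instance (an illustrative query, with a hypothetical action name):

	resolvedQuery: "Can I set a due date on a task"
	action: task.duedate.help
	parameters: { helpTopic: "due dates" }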

Imagine: your user turns on their new device and is greeted by a virtual assistant that allows for an interactive question-and-answer learning session. They ask just enough to get started and then they're off, enjoying your creation! If they get stuck later on, that virtual assistant is always there to answer with how-to's and tips.

The assistant can also be updated at any time with new answers as the app or device changes. That way you never lose a user when adding new functionality or updating the UX.

What's the best part? You don't have to imagine: the technology for an intuitive, affordable speech interface, for any app, device, or website, is here at api.ai. Boost your user acquisition and customer loyalty today.





User Defined Entities

As an API.AI super user, you already understand the power behind the platform. Your implementation is reaching its final stage, and you have started to ask for more advanced capabilities. Most frequently, developers just like you are interested in being able to customize entities on the user/request level.

Let's take a look at a quick scenario. You are a developer with an application that controls all of the devices and settings in a smart home. Out of the box, a user can refer to their objects in generic ways, like "turn the lights on in my office". But what if the user wants a more personalized experience? In that case, the end user must be able to add their own names for specific objects. For example, rather than referring to a generic office entity, they could say "turn on the lights in my mancave".

In order to work with user entities:

1) Create your entities (e.g. @houselocation and @device for a smart home app) and populate them with default values (e.g. @houselocation has Kitchen, Living Room, and Bedroom; @device has Lights).

2) Create an intent that uses this entity (e.g. “Turn on the @device in the @houselocation”)

3) Utilize the new entities parameter in the query endpoint. The entities object accepts an array of entities, following the format shown in the example below.

The submitted entities will replace entities defined in the developer console just for this one request.

Here's a sketch of what the curl request could look like (the access token, session ID, protocol version, and entity values below are placeholders):
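	# illustrative only: swap in your own client access token, session ID, and values
	curl -X POST "https://api.api.ai/v1/query?v=20150910" \
	  -H "Authorization: Bearer YOUR_CLIENT_ACCESS_TOKEN" \
	  -H "Content-Type: application/json; charset=utf-8" \
	  -d '{
	    "query": "turn on the lights in my mancave",
	    "lang": "en",
	    "sessionId": "abc123",
	    "entities": [{
	      "name": "houselocation",
	      "entries": [
	        { "value": "mancave", "synonyms": ["mancave", "man cave"] }
	      ]
	    }]
	  }'

In this request, the user-supplied "mancave" entry temporarily replaces the @houselocation values defined in the console, for this query only.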

The specified entity needs to exist in the developer console. The UI does not allow for creating an empty entity, so you must have some default value there.

On the Android SDK level: to use the SDK, please use the directive below instead of pulling it from the repo:

Usage example:

Finer-grained controls for entities are coming soon.