Voice Enabling TaskList App with api.ai

To help you get started voice enabling your apps for Android, we have put together a basic implementation walkthrough. For this demonstration, we decided to start with an open-source to-do list app and turn it into a smart task list that people can easily interact with on the go.

Link to the Android SDK: HERE

Play Store link to the app we will voice enable: HERE

Link to the Git repository for the TaskList open-source code: HERE





Domains Release: Don’t Reinvent the Wheel

If you are looking for quick, out-of-the-box speech-to-text functionality, then we have news for you! At api.ai, we understand that sometimes you don't want to create everything from scratch, so we offer a full voice interface solution.

Over the half decade we spent creating our NLP engine, we also built many significant knowledge domains to support interaction within the Assistant. For your convenience, we are releasing these Domain Knowledge Bases into api.ai.

Domains are pre-defined knowledge packages. By enabling Domains on the appropriately named tab in the dev console, you can make your agent understand thousands of diverse requests – no coding or thinking required! This week we have released Domains for: Smart Home, Maps and Points of Interest, Booking, Media, Times and Dates, Web Browsing, Small Talk, Apps, and Device Control. Of course, more will be coming soon.

So, what does that mean? Now, users' requests are sent for processing to both your agent and the Domains Knowledge Base. For ease of development, you will see both results in the api.ai test console. At runtime, your agent takes precedence: if a request is made via the HTTP API or one of the SDK helpers, api.ai will return the agent's response if one is available; if not, a Domain response will be returned. If your app requires more commands to be processed within certain Domains, you can create your own intents in addition to what is available. [Note: you need to use the same parameter names to be consistent with the parameters returned from the Domains.]
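To see the precedence in practice, here is a minimal Python sketch of inspecting a parsed /query response. The `source` field distinguishing agent matches from Domains matches is an assumption based on the v1 response format; verify the field name against your own test console output.

```python
# Minimal sketch: decide whether a parsed api.ai response came from
# your own agent or from the Domains knowledge base.
# The "source" field name is an assumption based on the v1 /query format.

def response_origin(response):
    """Return 'agent' if one of your intents matched, else 'domains'."""
    result = response.get("result", {})
    return result.get("source", "domains")

# Example response shaped like a v1 /query reply (illustrative only;
# the action and parameter names are hypothetical)
sample = {
    "result": {
        "source": "agent",
        "action": "add_task",
        "parameters": {"task": "buy milk"},
    }
}

print(response_origin(sample))  # -> agent
```

If your agent has no matching intent, the same check would report the Domains result instead.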

This means that you could create an empty agent, and as long as Domains are turned on it will understand all sorts of things (e.g. “I want to hear the Red Hot Chili Peppers”, “Lock the front door at 9 pm”, “Book me a hotel for 2 nights in San Francisco”, “Wake me up at 7:30 am”, “I want a hot dog”, “I want to go to a concert tomorrow”).

For more information, check out our Domains documentation here.





Why not interact with advertisements?

Do you remember the "olden days" when we had to watch TV commercials? We zoned out during irrelevant radio promos, muted Spotify blurbs, and desperately pressed "skip" on YouTube ads. Oh wait, it's still that way. Digital advertising is a multi-billion-dollar industry, and yet it is in need of some top-notch innovation. Advertisers really haven't had many options, until now…

With speech-to-text capabilities growing, modern advertisements are on the brink of a revolution. This revolution will leverage both big data and cutting-edge technology to provide a personalized ad experience that was previously unimaginable. Paid ads and free ads alike can now be updated dynamically to reflect your preferences and interact with you as if you were, well, a human.

Let’s look at a few use cases through something like Spotify.

Ad: “Hi Mike! I noticed that your mom’s birthday is around the corner.  She might enjoy a dozen hot pink roses.  Would you like me to send her some?”

Mike: “Oh, I forgot.  Please send her two dozen white roses.”

Ad: “Sure thing.  $62 will be charged to your credit card on file.”

Ad: “The summer is creeping up and I have some hot deals specifically for you, Mike.  Would you like to hear more?”

Mike: “Are there any flight to Hawaii deals?”

Ad: “Let me look. Yes, I have a deal for a flight to Maui during the first week of June.  Would you like to book it now or should I send you an email with the details?”
Mike: "Send me an email."

Ad: "Hi Mike, Geico can save you 15% in under 15 minutes. If you would like to hear more about how to save on your auto insurance by switching to Geico, just ask."

(Later that day)

Mike: “Can you tell me more about the Geico deal from earlier?”

Ad: "Sure… Let me connect you to a Geico representative to find a plan that's best for you."




Mandatory Versioning Coming Soon

Greetings, API.AI Developers!

We are announcing that starting June 1, 2015, the v versioning parameter will be mandatory in all api.ai calls. Introducing versioning support will allow us to add new functions and make improvements to api.ai that are not backwards compatible. We are using the same approach as the Foursquare API.

A couple of examples of such changes:

  • Domains functionality that you might already have seen in the console
  • Enhanced return formats for system entities. E.g., the @sys.date entity will handle dates both in the future and in the past.

To use it, include the v=YYYYMMDD parameter when you make a request. For example: https://api.api.ai/v1/query?v=20150330&query=weather&lang=en&sessionId=123
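Building the versioned URL can be sketched in a few lines of Python; the query and session values here simply mirror the example above.

```python
from urllib.parse import urlencode

def build_query_url(query, session_id, version="20150330", lang="en"):
    """Build a versioned /query URL. The v=YYYYMMDD parameter pins
    api.ai behavior to a specific release date."""
    base = "https://api.api.ai/v1/query"
    params = {"v": version, "query": query, "lang": lang, "sessionId": session_id}
    return base + "?" + urlencode(params)

url = build_query_url("weather", "123")
print(url)
# -> https://api.api.ai/v1/query?v=20150330&query=weather&lang=en&sessionId=123
```

A real request would also need your access token in the Authorization header, as described in the HTTP API documentation.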

Check out our HTTP API documentation for more details. If you have any questions, always feel free to drop us a line.

Happy coding!
Team API.AI





Create Intents with System Entities

For your convenience, we have included pre-defined system entities. We take care of common entities so that you don't have to. System entities include things such as countries, capitals, music artists/genres, times, dates, numbers, names, colors, and more (you can find the full list in the entity overview). When you use system entities, variability in incoming values is handled for you (e.g. USA can be referred to as the States or the United States of America).

A simple example is found below, where we are including a few system entities for an intent to travel.

To make this intent even more robust, we can make some more improvements by using inline entities:

Here, we are defining an inline entity that includes a broad set of US destinations that users can refer to. We do this with the inline entity format: "@{…, …, …}:alias". The intent defines two aliases, "place" and "date", that will be returned to you in the JSON object. You can try it out by entering this example and saying something like: "I want to visit the Statue of Liberty on Jan 3rd".

Similarly, the inline entity syntax can be used to define a number of synonyms: "@{want to, wanna, have to}". Because no alias was added at the end of this inline entity, nothing will be returned in the JSON. This approach lets you easily add multiple variations in one intent and reduce the number of user expressions you need to provide.
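When the intent matches, the aliased values come back in the parameters of the JSON result. Here is a hedged Python sketch of reading them; the response shape below is illustrative of the v1 result format, not the exact wire format, and the "travel" action name is hypothetical.

```python
# Extract aliased parameter values ("place", "date") from a parsed
# api.ai response. The response shape here is illustrative only.

def get_travel_params(response):
    params = response.get("result", {}).get("parameters", {})
    return params.get("place"), params.get("date")

sample = {
    "result": {
        "action": "travel",  # hypothetical action name
        "parameters": {"place": "Statue of Liberty", "date": "2015-01-03"},
    }
}

place, date = get_travel_params(sample)
print(place, date)  # -> Statue of Liberty 2015-01-03
```

The alias names you choose in the intent are exactly the keys that appear under parameters, which is why they must match the Domain parameter names when you extend a Domain.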

If you find that a system entity is missing, let us know and we will fix it!