Amazon is making Alexa smarter at answering questions and at anticipating what you may ask next. The company has armed Alexa with a new machine learning system that "can infer that an initial question implies a subsequent request". Explaining the new system, Amazon gave an example: if a user asks, "How long does it take to steep tea?", Alexa might answer the question ("Five minutes is a good place to start") and then follow up by asking, "Would you like me to set a timer for five minutes?"
It remains to be seen how well Alexa actually predicts what users want. This matters because such a system can become irritating if it misfires. For example, if you ask, "Hey Alexa, how much time does it take to bake a cupcake?", you would not want Alexa to tack on "Would you like me to set a timer?" every single time you ask a similar question.
Keeping this in mind, Amazon claims that to determine whether to suggest a latent goal, it uses a deep-learning-based trigger model that factors in several aspects of the dialogue context, such as the text of the customer’s current session with Alexa and whether the user has engaged with Alexa’s multi-skill suggestions in the past.
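Amazon has not published the details of its trigger model, but the idea of scoring dialogue-context signals to decide whether to suggest a latent goal can be illustrated with a minimal sketch. The feature names and weights below are entirely hypothetical; a real model would learn them from dialogue data rather than hard-code them.

```python
import math

def trigger_score(features, weights, bias=0.0):
    """Logistic score: estimated probability that suggesting a
    latent goal is appropriate in this dialogue context."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical feature weights (illustrative only).
weights = {
    "mentions_duration": 2.0,        # e.g. "how long does it take to steep tea?"
    "past_suggestion_accepted": 1.5, # user has engaged with suggestions before
    "repeated_question": -2.5,       # same question asked many times recently
}

# A dialogue context where a timer suggestion seems reasonable.
context = {
    "mentions_duration": 1.0,
    "past_suggestion_accepted": 1.0,
    "repeated_question": 0.0,
}

score = trigger_score(context, weights, bias=-1.0)
suggest = score > 0.5  # only follow up when the model is confident
```

A negative weight on `repeated_question` captures exactly the cupcake-timer annoyance described above: the more often a user has already heard the suggestion, the less likely the model is to repeat it.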
At least Amazon is aware that when you ask Alexa for "recipes for chicken", you would not want a follow-up question from Alexa like "Do you want me to play chicken sounds?"
To make its predictions meaningful, Amazon uses a semantic-role labeling model, which looks for named entities and other arguments in the current conversation, together with bandit learning, in which machine learning models track whether recommendations are actually helping. The new feature is already available to Alexa customers in English in the United States.
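The bandit-learning idea mentioned above can be sketched as a simple epsilon-greedy bandit that tracks, per candidate suggestion, how often users accept it. This is not Amazon's implementation; the suggestion strings and parameters are illustrative assumptions.

```python
import random

class SuggestionBandit:
    """Epsilon-greedy bandit: learns which follow-up suggestions
    users accept, and favors those over time."""

    def __init__(self, suggestions, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {s: 0 for s in suggestions}   # times each suggestion was shown
        self.accepts = {s: 0 for s in suggestions}  # times the user accepted it

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.counts))  # explore at random
        # Exploit: pick the suggestion with the best acceptance rate so far.
        return max(self.counts,
                   key=lambda s: self.accepts[s] / max(self.counts[s], 1))

    def record(self, suggestion, accepted):
        self.counts[suggestion] += 1
        self.accepts[suggestion] += int(accepted)

bandit = SuggestionBandit(["set a timer", "play chicken sounds"])

# Simulated feedback: users accept the timer suggestion and ignore the other.
for _ in range(100):
    s = bandit.choose()
    bandit.record(s, accepted=(s == "set a timer"))
```

After a few rounds of this feedback loop, the bandit concentrates on the suggestion users actually respond to, which is exactly the self-correcting behavior the article attributes to the feature.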