Amazon Alexa head scientist on developing trustworthy AI systems

Amazon Alexa Echo Dot 2019
Image Credit: Amazon

Particularly over the past half-century, humans have had to adapt to profound technological changes like the internet, smartphones, and personal computers. In most cases, adapting to the technology has made sense — we live in a far more globalized world than we did 50 years ago. But there’s a difference when it comes to AI and machine learning technologies. Because they can learn about people and conform to their needs, the onus is on AI to adapt to users rather than the other way around — at least in theory.

Rohit Prasad, head scientist at Amazon’s Alexa division, believes that the industry is at an inflection point. Moving forward, it must ensure that AI learns about users in the same ways users learn so that a level of trust is maintained, he told VentureBeat in a recent phone interview.

One way Amazon’s Alexa team hopes to make AI more trustworthy and more personal is by incorporating contextual awareness, such as the individual preferences of Alexa users in a household or business. Starting later this year, users will be able to “teach” Alexa things like their dietary preferences, which Alexa will then apply to future interactions — suggesting only vegetarian restaurants and recipes, for example.

“Alexa will set the expectation about where this preference information will be used and be very transparent about what it learns and reuses, helping to build tighter trust with the customer,” Prasad said. “These are the benefits to this.”
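
The mechanics of a feature like this can be illustrated with a short sketch. The Python below is illustrative only; UserProfile, teach_preference, and filter_suggestions are hypothetical names, not part of Alexa's actual internals. It shows one plausible shape of the idea: store an explicitly taught preference per user, then apply it when ranking suggestions.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    user_id: str
    preferences: dict = field(default_factory=dict)  # e.g. {"diet": "vegetarian"}

def teach_preference(profile, key, value):
    """Record a preference the user explicitly taught, e.g. diet -> vegetarian."""
    profile.preferences[key] = value

def filter_suggestions(profile, candidates):
    """Drop suggestions that conflict with a taught dietary preference, if any."""
    diet = profile.preferences.get("diet")
    if diet is None:
        return candidates
    return [c for c in candidates if diet in c.get("diet_tags", [])]

profile = UserProfile(user_id="household-1")
teach_preference(profile, "diet", "vegetarian")
restaurants = [
    {"name": "Green Table", "diet_tags": ["vegetarian", "vegan"]},
    {"name": "Smokehouse 42", "diet_tags": ["barbecue"]},
]
print(filter_suggestions(profile, restaurants))  # only "Green Table" remains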

Toxicity and privacy

Fostering trust hasn’t always been the Alexa team’s strong suit. In 2019, Amazon launched Alexa Answers, a service that allows any Amazon customer to submit responses to unanswered questions. Amazon gave assurances that submissions would be policed through a combination of automatic and manual review, but VentureBeat’s analyses revealed that untrue, misleading, and offensive questions and answers were served to millions of Alexa users. In April 2019, Bloomberg revealed that Amazon employs contract workers to annotate thousands of hours of audio from sometimes accidentally activated Alexa devices, prompting the company to roll out user-facing tools that let customers quickly delete cloud-stored data. And researchers have claimed that Amazon runs afoul of its own developer rules regarding location privacy on Alexa devices.

In response to questions about Alexa Answers, Prasad said that Amazon has “a lot of work [to do]” on guardrails and ranking the answers to questions while filtering information that might be insensitive to a user. “We know that [Alexa devices] are often in a home setting or communal setting, where you can have different age groups of people with different ethnicities, and we have to be respectful of that,” he said.

Despite the missteps, Alexa has seen increased adoption in the enterprise over the past year, particularly in hospitality and elder care centers, Prasad says. He asserts that one of the reasons is Alexa’s ability to internally route requests to the right app, a capability that’s enabled by machine learning.
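
In rough terms, that kind of routing can be thought of as scoring every candidate experience for an utterance and dispatching to the highest-scoring one. The Python sketch below is an assumption about the general pattern, not Amazon's implementation; route_request and keyword_scorer are invented names, and a production system would use a learned relevance model rather than keyword overlap.

def route_request(utterance, skills, scorer):
    """Dispatch an utterance to whichever registered skill the scorer ranks highest."""
    best_name = max(skills, key=lambda name: scorer(utterance, name))
    return skills[best_name](utterance)

def keyword_scorer(utterance, skill_name):
    # Stand-in for a learned relevance model: crude keyword overlap with the skill name.
    return sum(word in utterance.lower() for word in skill_name.split("_"))

skills = {
    "music_player": lambda u: "Playing your playlist.",
    "room_service": lambda u: "Ordering room service now.",
}
print(route_request("please order room service", skills, keyword_scorer))  # room_service handles it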

The enterprise has experienced an uptick in voice technology adoption during the pandemic. In a recent survey of 500 IT and business decision-makers in the U.S., France, Germany, and the U.K., 28% of respondents said they were using voice technologies, and 84% expect to be using them in the next year.

“[Alexa’s ability] to decide the best experience [is] being extended to the enterprise, and I would say is a great differentiator, because you can have many different ways of building an experience by many different enterprises and individual developers,” Prasad said. “Alexa has to make seamless requests, which is a very important problem we’re solving.”

Mitigating bias

Another important — albeit intractable — problem Prasad aims to tackle is inclusive design. While natural language models are the building blocks of services including Alexa, growing evidence shows that these models risk reinforcing undesirable stereotypes. Detoxification has been proposed as a fix for this problem, but the coauthors of newer research suggest even this technique can amplify rather than mitigate biases.

The increasing attention on language biases comes as some within the AI community call for greater consideration of the effects of social hierarchies like racism. In a paper published last June, Microsoft researchers advocated for a closer examination and exploration of the relationships between language, power, and prejudice in their work. The paper also concluded that the research field generally lacks clear descriptions of bias and fails to explain how, why, and to whom specific bias is harmful.

On the accessibility side, Prasad points to Alexa’s support for text messages, which lets users type messages rather than talk to Alexa. Beyond this, he says that the Alexa team is investigating “many” different ways Alexa might better understand different kinds of speech patterns.

“[Fairness issues] become very individualized. For instance, if you have a soft voice, independent of your gender or age group, you may struggle to get Alexa to wake up for you,” Prasad said. “This is where more adaptive thresholding can help, for example.”
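
The adaptive thresholding Prasad mentions presumably means relaxing the wake-word acceptance bar for voices whose detector scores run consistently low. The sketch below is an assumption about how such a mechanism could work, not Alexa's actual wake-word pipeline; AdaptiveWakeWordGate and its parameters are hypothetical.

from collections import deque

class AdaptiveWakeWordGate:
    """Accept a wake-word detection when its score clears a per-speaker threshold."""

    def __init__(self, base_threshold=0.6, floor=0.4, window=50):
        self.base_threshold = base_threshold       # default acceptance bar
        self.floor = floor                         # never relax below this
        self.recent_scores = deque(maxlen=window)  # detector scores seen on this device

    def current_threshold(self):
        if not self.recent_scores:
            return self.base_threshold
        typical = sum(self.recent_scores) / len(self.recent_scores)
        # Soft speakers produce lower scores on average; relax the bar toward
        # their typical level, but never above the default or below the floor.
        return max(self.floor, min(self.base_threshold, 0.9 * typical))

    def accept(self, detector_score):
        threshold = self.current_threshold()
        self.recent_scores.append(detector_score)
        return detector_score >= threshold

gate = AdaptiveWakeWordGate()
for score in [0.45, 0.47, 0.50, 0.46]:  # a consistently soft voice
    print(gate.accept(score), round(gate.current_threshold(), 2))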

Prasad also says that the team has worked to remove biases in Alexa’s knowledge graphs, or the databases that furnish Alexa with facts about people, places, and things. These knowledge graphs, which are created automatically, could reinforce biases present in the data they contain, like “nurses are women” and “wrestlers are men.”

“It’s early work, but we’ve worked incredibly hard to reduce those biases,” Prasad said.
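
To make the knowledge-graph point concrete: one simple way to surface such skews is to count how often an occupation co-occurs with each gender across a graph's (subject, relation, object) triples. The toy Python example below is illustrative only and says nothing about how Amazon actually audits its graphs.

from collections import Counter

# Toy (subject, relation, object) triples standing in for a real knowledge graph.
triples = [
    ("alice", "occupation", "nurse"),    ("alice", "gender", "female"),
    ("carol", "occupation", "nurse"),    ("carol", "gender", "female"),
    ("dave",  "occupation", "wrestler"), ("dave",  "gender", "male"),
]

def gender_skew(triples, occupation):
    """Count how often each gender co-occurs with a given occupation."""
    gender_of = {s: o for s, r, o in triples if r == "gender"}
    holders = [s for s, r, o in triples if r == "occupation" and o == occupation]
    return Counter(gender_of.get(p, "unknown") for p in holders)

print(gender_skew(triples, "nurse"))  # Counter({'female': 2}) -> a skew worth reviewing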

Prasad believes that tackling these challenges will ultimately lead to “the Holy Grail” in AI: a system that understands how to handle all requests appropriately without manual modeling or human supervision. Such a system would be more robust to variability, he says, and enable users to teach it to perform new skills without the need for arduous engineering.

“[With Alexa,] we’re taking a very pragmatic approach to generalized intelligence,” he said. “The biggest challenge to me as an AI researcher is building systems that perform well but that can also be democratized such that anyone can build a great experience for their applications.”
