Chatbots and voice assistants: Often overused, ineffective, and annoying

While it’s cute to talk to your computer, chatbots provide little value to many processes, yet they get an inordinate amount of attention from providers


There’s yet another cloud service from AWS: Amazon Lex, which lets developers build conversational interfaces into applications for voice and text. It uses the same deep learning technologies that power Amazon's Alexa voice assistant.

Lex lets you quickly build natural language conversational bots, aka chatbots. Microsoft has a similar technology, called the Microsoft Bot Framework. Chatbot services seem to be something most public cloud providers are looking to offer, and many third parties sell chatbot technology as well.
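
To make the idea concrete, here is a minimal sketch of what calling such a service looks like in code, using the AWS SDK for Python (boto3) to send a text utterance to a Lex bot. The bot name, alias, user ID, and region below are placeholder assumptions, not anything defined in this article; you would substitute whatever exists in your own account.

```python
# Minimal sketch: sending a text utterance to an Amazon Lex (V1) bot via boto3.
# The bot name, alias, user ID, and region are placeholders for your own setup.
import boto3

lex = boto3.client("lex-runtime", region_name="us-east-1")

response = lex.post_text(
    botName="OrderFlowers",   # hypothetical bot name
    botAlias="prod",          # hypothetical bot alias
    userId="demo-user-1",     # any unique ID for this conversation
    inputText="I'd like to order a dozen roses",
)

# Lex returns its best guess at the intent, any slot values it extracted,
# and the bot's next prompt in the conversation.
print(response.get("intentName"), response.get("slots"))
print(response.get("message"))
```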

But the question is not whether we can have a voice conversation with our applications (we clearly can) but whether we should.

Natural language processing has been around for some time, but only recently has it gotten practical. Still, it’s not perfect.

Most of us have been frustrated by misunderstandings as the computer tries to take something as imprecise as the human voice and work out what we actually mean. Even with the best speech processing, no chatbot achieves 100-percent recognition, much less 100-percent comprehension.

It seems very inefficient to resort to imprecise systems when we have more precise ones available. Even if chatbots were 100-percent accurate in their recognition and comprehension, why use voice at all? If things need to talk to each other, let them do so through direct digital mechanisms, which are far more accurate than a person talking to a machine.

One aspect of cloud computing is to automate what has yet to be automated, in many cases removing people from the system. In other words, let the machines chat with each other at 100-percent accuracy rather than have me talk to a chatbot. The ultimate objective of much AI-driven automation is to remove people from the process wherever possible, because human interaction introduces latency, inaccuracy, and avoidable errors.

Plus, if I want or need to interact with my computer directly, where a person truly needs to be part of the dialog, I have a keyboard and a mouse to do so, and when I use them I won't bother anyone else in the room. You can't say the same for voice systems. Nor do I have to worry about the system responding out of turn because it heard something not meant for it, a common problem with Siri, Cortana, Google Now, Alexa, and their ilk.

A lot of the focus on voice assistants and chatbots seems to be because we can, not because we should. Let’s stop trying to fit every peg into the same type of hole.

Copyright © 2017 IDG Communications, Inc.