Artificial intelligence technology isn’t where we thought it would be. The fact is, these things take time.

Dan O’Connell, Chief Revenue Officer, Dialpad

May 27, 2022


I can still remember the first time I used Alexa.

It’s crystal clear in my head: I said, “Alexa, play Take On Me,” and a few seconds later A-ha’s synthy drums kicked in. It was one of the few times in my life that artificial intelligence has left me speechless.

It was a real-world version of the technology I grew up seeing in countless science fiction movies and TV shows. Giving that command, I felt like Captain Kirk talking to the computers on the USS Enterprise.

This was in 2013. I recall wondering where the technology would be in another five or ten years -- which is roughly where we are right now. I imagined myself having full-blown conversations with personal AI assistants and giving complex voice instructions to my computer. That all seemed achievable, even probable. After all, technology advances exponentially.

With the benefit of hindsight, I can see that I was too optimistic. We’re still a long way from bona fide human-to-AI conversation.

Human imaginations always outpace technology. What I can imagine in a minute takes a decade to become something tangible. Left unchecked, our perceptions race away from facts. Every so often, we have to recalibrate our expectations.

We need to swap out our science fiction dreams for technological fact.

How Advanced Is AI -- Really?

For as long as I can remember, people have claimed that fully self-driving cars are just over the horizon. Tesla, Toyota, General Motors, and Google all promised us self-driving cars by the end of 2020, but we’re still waiting. The technology always seems to be just out of reach.

It’s the same in most other industries.

Take cloud communication. People have long dreamed of autonomous AI agents that handle the bulk of contact center communication. Some have even promised they’re on the way. But like building an autonomous car, crafting an artificial agent is a big challenge. I have no doubt that we can get there, just that it will take more time than expected.

Think about just two pieces of that puzzle: speech recognition (transcribing speech into text) and natural language processing (understanding text and the spoken word).

Today, technology transcribes calls in near real time, with far better accuracy than I could manage if I had to play stenographer for a day. And natural language processing technology for enterprises is good, too. It can analyze transcripts and provide some basic understanding of topics, questions, sentiment, action items, and so on.
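
To make that concrete, here’s a minimal sketch of those two building blocks using Hugging Face’s open-source transformers library. This is purely my illustration, not any vendor’s actual stack: the models are the library’s defaults, and “call.wav” is a hypothetical recording.

from transformers import pipeline

# Speech recognition: turn a recorded call into text.
# ("call.wav" is a stand-in for a real audio file.)
transcriber = pipeline("automatic-speech-recognition")
transcript = transcriber("call.wav")["text"]

# Natural language processing: a surface-level read on what was said.
classifier = pipeline("sentiment-analysis")
print(transcript)
print(classifier(transcript, truncation=True))  # truncate long calls to the model's limit
# e.g. [{'label': 'POSITIVE', 'score': 0.98}]

Both steps do exactly what’s described above -- and no more. The pipeline can label a call POSITIVE without having any idea what the call was about.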

But what AI can’t do just yet is understand what a conversation is actually about. A system can transcribe a conversation about puppies. It can pull out questions about breeds and highlight an unanswered question about Labrador veterinary care. But it doesn’t know what a Labrador is or what a flea treatment entails. It doesn’t even know what a dog is. Is that kind of tragic, and a little creepy? Sure, but it’s also true.

Today’s AI systems are great at simple, repeatable functions. Because they perform those functions so well, they can give a false impression of the technology’s potential. The leap from simple function to fully autonomous agent or self-driving car is a chasm. I feel confident saying that we won’t see a fully autonomous smart agent replacing a human agent in the next five to ten years.

There’s a gap between what we believe AI can do and what it’s capable of in the real world. It’s up to companies to close that gap. Because if we let rumor run wild, it’ll undermine all the breakthroughs we have made.

Let’s Recalibrate Our Perception of AI

It’s tempting to tweak the truth and embellish functionality, especially when it comes to something as opaque as AI. And a lot of companies do just that. According to venture firm MMC, four in ten European startups classified as AI companies don’t use AI technology in a way that’s “material” to their business. In a lot of cases, their AI powers things like chatbots or fraud prevention. Both are useful applications, but they’re “more of an auxiliary service than a central selling point.”

Small embellishments or overpromises probably help in the short term. A company can generate media buzz, win over some customers, and pad its bottom line. But once people actually start using the product, those small wins turn into big losses.

When you overpromise and underdeliver, people get frustrated. They complain. They cancel. They bad-mouth your company to their network. I know that’s true because I’ve been that consumer.

In the mid-1990s, I was captivated by an ad for a speech-to-text program. They promised the whole science fiction experience: speaking out loud, giving voice commands, and perfect transcription. It sounded amazing, so I downloaded the program and spent 60 hours training it on my voice. Prep work done, I sat down to narrate a college essay.

Let’s just say it failed to live up to any semblance of expectations.

It missed commands, transcribed poorly, and was far more frustrating than just writing my college papers with a pen, paper, and Bic Wite-Out. It was all hype and no substance. I ditched the tool and never came back. It’s only now, decades later -- and with the development of personal assistants -- that I’m finally coming back to voice commands.

Basic Rules to Follow

Here’s the wild part: There’s no regulation around this whatsoever. Companies have to read cautionary tales like this and decide to regulate themselves. For those leaders and organizations willing to hold themselves accountable, there are some basic rules.

First, businesses should be upfront about how they source their training data. Companies like Google and Facebook have rightly caught flak for being cagey about their data-gathering methods. Where does the data come from? Is it representative? How do you manipulate it after collection?

If you’re an AI practitioner or part of the go-to-market team for an AI product, you need to be open. There’s nothing sensitive about that disclosure. What happens when you tell your competitors how you find your data? Nothing. Owning the data is the important bit, not your data-gathering process.

Second, be clear about how you’re using that data. Data is the lifeblood of AI systems. It’s what makes them work, so there’s no sidestepping the question. When you’re upfront, people are usually happy to opt into contributing their anonymized data to a collective pool, especially when you tell them it’s to help improve the product.

Last, describe your AI products accurately and honestly. Be upfront about what your product can do and, where appropriate, what it can’t. You might lose an inch to your competitors in the short term, but ethical companies stand to win out in the long term. They’ll retain happy customers, enjoy sustainable growth, and blow past organizations playing fast and loose with the truth.

The human imagination is a brilliant thing. But we can’t let it rewrite our technological reality. By all means, imagine, daydream, and ponder. Think up dozens of new AI applications and products. Use those ideas to fuel your work.

But don’t let your ideas write checks your technology can’t cash.

About the Author

Dan O’Connell

Chief Revenue Officer, Dialpad

Dan O’Connell is the Chief Revenue Officer at Dialpad. Previously, he was the CEO of TalkIQ, a real-time speech recognition and natural language processing startup that Dialpad acquired in May 2018. Prior to TalkIQ, he held various sales leadership positions at AdRoll and Google.
