“I saw the angel in the marble and carved until I set him free.”

There is no proof that Michelangelo actually said this, but it is a beautiful metaphor for building conversational AI (chatbots and voice self-service) applications with generative AI and large language models (LLMs).

At the recent Five9 CX Summit, I learned a lot from Five9 CTO Jonathan Rosenberg’s presentation on how LLMs and generative AI will change contact centers. One thing that stuck with me was his description of the difference between building chatbots and self-service applications with traditional machine learning vs. using generative AI and LLMs to create the same solutions.

It’s the difference between building and carving.

With traditional tools, you build everything. You need to define every question, anticipate how it will be phrased, and map out all the conversational flows. While there are many no-code toolsets for building these solutions, it’s still programming: developers need to identify what customers are going to ask, all the ways they might ask it, and then build out explicit handling for each “intent” they want to support. Vendors have done a great job extending their tools and prebuilding applications for specific use cases. While this reduces the effort to deploy the applications, deployment and management costs are still high enough to keep many brands from building more than a rudimentary conversational AI application for customer self-service.
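
To make the contrast concrete, here is a toy sketch in Python of what “building” means (the intent names and utterances are mine, not any vendor’s actual toolset): every intent and every phrasing has to be enumerated by hand, and anything you didn’t anticipate falls through.

```python
# Toy sketch of the traditional intent-based approach: every intent,
# every utterance variant, and every flow is enumerated by hand.

INTENTS = {
    "check_balance": [
        "what's my balance",
        "how much money do i have",
        "check my account balance",
        # ...and every other phrasing the team can anticipate
    ],
    "reset_password": [
        "i forgot my password",
        "reset my password",
        "i can't log in",
    ],
}

def classify(utterance: str) -> str:
    """Naive matcher: real platforms train an ML classifier on these
    examples, but the burden of enumerating them is the same."""
    text = utterance.lower()
    for intent, examples in INTENTS.items():
        if any(example in text for example in examples):
            return intent
    return "fallback"  # anything unanticipated dead-ends here

print(classify("How much money do I have right now?"))  # check_balance
print(classify("Why was I charged twice?"))             # fallback
```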

Contrast this with generative AI and LLM-driven deployments. The system already processes language well enough that it can recognize most intents out of the box, and as ChatGPT is teaching us all, it knows how to talk pretty well. We are no longer building; we are using prompts to sculpt away the unnecessary, to focus on the conversation we want the system to have. It’s much easier to focus than it is to build.
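
Here is a minimal sketch of that “carving,” assuming the OpenAI Python SDK (any LLM API works the same way; the brand, model choice, and prompt are invented): the work is in the system prompt that fences off everything except the conversation you want.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The "carving" happens here: we are not enumerating intents, we are
# fencing off everything except the conversation we want to have.
SYSTEM_PROMPT = """You are a customer service agent for Acme Bank (a
hypothetical brand). You may ONLY discuss account balances, card
replacement, and branch hours. If the customer asks about anything
else, politely say you can't help with that and offer a human agent.
Never guess at account details you were not given."""

def respond(customer_message: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": customer_message},
        ],
    )
    return completion.choices[0].message.content

print(respond("What are your branch hours on Saturday?"))
```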

Conversational AI platforms have a significant role to play in connecting generative AI to the rest of the customer service stack. Every call to a generative AI system is expensive (the costs are plummeting, but it’s still not cheap). Getting account-specific data or transactional information from a CRM or other back-end system is not something generative AI is built for. Conversational AI vendors see it as their job to provide the additional functionality that makes these applications truly valuable, and to build the guardrails that allow brands to offer their customers generative AI self-service without fear of a class-action-worthy hallucination.
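
A rough sketch of that glue role (the CRM lookup, the guardrail check, and the call_llm stand-in are all hypothetical): the conversational AI layer fetches the account data, grounds the model in it, and checks the draft answer before it ever reaches the customer.

```python
def fetch_account(customer_id: str) -> dict:
    """Stand-in for a real CRM/back-end call; generative AI doesn't do this part."""
    return {"customer_id": customer_id, "balance": "$1,204.55", "tier": "gold"}

def call_llm(prompt: str) -> str:
    """Stand-in for any hosted LLM API; returns a canned reply here."""
    return "Your balance is $1,204.55."

def answer_balance_question(customer_id: str, question: str) -> str:
    account = fetch_account(customer_id)
    # Ground the model in retrieved facts instead of letting it guess.
    prompt = (
        f"Answer using ONLY these facts: {account}\n"
        "If the facts don't cover the question, say so.\n"
        f"Question: {question}"
    )
    draft = call_llm(prompt)
    # Guardrail: if the draft quotes a balance that isn't in the data,
    # fall back to a safe response rather than risk a hallucination.
    if "balance" in question.lower() and account["balance"] not in draft:
        return "Let me connect you with an agent to confirm that."
    return draft

print(answer_balance_question("cust-42", "What's my balance?"))
```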

There is a lot to learn here, and many new domains to master. Prompt engineering, the art of phrasing requests to a generative AI system so you get the answer you want and avoid hallucinations, is a new discipline that is evolving quickly. Under the covers, there are tokens to count, vectors to search, and LLM chains to build. There is a lot of work to do and many ways to put these pieces together to deliver excellent customer self-service applications.
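
To illustrate one of those pieces, here is a toy two-step LLM chain (the prompts and the canned call_llm stub are illustrative only): the first call extracts structured intent, the second drafts the reply. Keeping each step small makes each prompt easier to engineer and each output easier to check.

```python
import json

def call_llm(prompt: str) -> str:
    """Stand-in for any hosted LLM API; canned replies keep this runnable."""
    if prompt.startswith("Extract"):
        return '{"intent": "order_status", "order_id": "A-1001"}'
    return "Thanks for reaching out! I'm checking on order A-1001 now."

EXTRACT_PROMPT = (
    "Extract the customer's intent and any order number from this "
    "message as JSON with keys 'intent' and 'order_id' (null if absent):\n{msg}"
)
REPLY_PROMPT = (
    "Write a short, friendly reply for a customer with intent "
    "'{intent}' about order '{order_id}'. Do not invent order details."
)

def handle(message: str) -> str:
    # Step 1: extract structured intent from the raw message.
    extracted = json.loads(call_llm(EXTRACT_PROMPT.format(msg=message)))
    # A real pipeline would validate the JSON here before continuing.
    # Step 2: draft the customer-facing reply from the structured data.
    return call_llm(
        REPLY_PROMPT.format(intent=extracted["intent"], order_id=extracted["order_id"])
    )

print(handle("Where is my order A-1001?"))
```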

As an analyst, I get to talk to many conversational AI vendors to see what they are doing to take advantage of the promise of generative AI and LLMs for customer self-service. This is a big pivot. Many of the ways vendors differentiated themselves in the past matter far less than they did in a pre-generative-AI world. Great tools to identify unmanaged intents, fast and easy ways to train for specific intents, and prebuilt modules to cover the most common conversations are not going to set vendors apart for much longer.

I’ve not spoken with a single conversational AI vendor that is in denial about the impact of generative AI on their solutions. This is a highly competitive market, and the race to embrace generative AI is on, so vendors are taking a variety of approaches. Still, there are some common elements in what I am seeing, including the following:

  • Making prompt engineering fit within their no-code environments. I’ve seen some simple things, such as turning comment fields within their development toolsets into prompt engineering windows. These approaches can work and let folks get to market quickly.
  • Rethinking development approaches to work in the generative AI world. The development tools will evolve from clever extensions of what is available today to tools that are purpose-built to allow nondevelopers to build generative self-service applications.
  • Becoming the orchestration engine between different LLMs or generative AI systems to give brands control over the resources they use (see the sketch after this list).
  • Managing the application flow and data sharing between a generative AI system and the engine responsible for back-end integration and gathering of data.

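The orchestration idea in particular lends itself to a sketch. Assuming nothing beyond plain Python (the providers and routing rules below are invented for illustration), an orchestration engine is, at its core, one interface in front of several models, with brand-controlled rules deciding which model handles which traffic, and at what cost.

```python
from typing import Callable

# Each "provider" is just a function from prompt to answer here; in
# practice each would wrap a different vendor's API.
def cheap_model(prompt: str) -> str:
    return f"[cheap model] answer to: {prompt[:40]}"

def premium_model(prompt: str) -> str:
    return f"[premium model] answer to: {prompt[:40]}"

def on_prem_model(prompt: str) -> str:
    return f"[on-prem model] answer to: {prompt[:40]}"

# Ordered routing rules: first match wins.
ROUTES: list[tuple[Callable[[str], bool], Callable[[str], str]]] = [
    # Anything that smells like sensitive data stays on-prem.
    (lambda p: "account number" in p.lower(), on_prem_model),
    # Long, complex turns go to the expensive model.
    (lambda p: len(p) > 500, premium_model),
]

def route(prompt: str) -> str:
    for matches, model in ROUTES:
        if matches(prompt):
            return model(prompt)
    return cheap_model(prompt)  # default to the low-cost option

print(route("What are your hours?"))                     # cheap model
print(route("My account number is 12345, please help"))  # on-prem model
```
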
There are even some generative-AI-based customer self-service applications in production already. Nothing I’ve seen has been life-changing to date, but the effort is valiant, and you can start to see the promise in these applications.

We are moving to a world where deployment of self-service applications is easier and cheaper than we ever imagined, and the scope of what these applications can do will extend far beyond what we are used to seeing.

It’s the difference between building and carving.