Lucas Mearian
Senior Reporter

Q&A: ServiceNow CIO sees an ‘iPhone moment’ for genAI

feature
Nov 07, 2023 | 17 mins
Artificial Intelligence | Augmented Reality | Careers

Chris Bedi has been involved in a full-court press to take advantage of the productivity, efficiency, and efficacy capabilities of artificial intelligence for years. But when ChatGPT launched a year ago, everything changed.

Credit: Shutterstock/iQoncept

Like many enterprises, ServiceNow has been incorporating artificial intelligence (AI) into its internal systems and customer-facing products for years. But when OpenAI’s ChatGPT emerged a year ago, everything changed — fast.

Suddenly, what had been machine learning — or “analytical AI” that could produce recommendations based on financial, sales, and marketing data — became natural-language processing. A brand-new employee could suddenly ask the corporate generative AI (genAI) application for an answer to an in-depth client question. Seasoned employees could ask the platform for information about company benefits or how to get a new laptop.

Chris Bedi joined ServiceNow in September 2015 and serves as the company’s chief digital information officer. Prior to joining ServiceNow, he spent almost four years as CIO of JDS Uniphase Corp. (JDSU), where he was responsible for IT, facilities, and indirect procurement. Before that, Bedi held various positions at VeriSign between 2002 and 2011, including CIO, vice president of corporate development, and vice president of human resource operations.

When he joined ServiceNow, the company was earning about $800 million a year in revenue. Today, its revenue tops $8 billion, and it has about 22,000 employees. Bedi has also gone all-in on AI.

ServiceNow is now implementing genAI through an internal pilot program. Leveraging its own platform and third-party LLMs, the company has gone live with 15 genAI pilots across multiple departments, including customer service, IT, HR, and sales.

Those trials are focused on driving better customer and employee experiences with higher self service, agent productivity, automated marketing lead management, and text-to-code software development.


ServiceNow’s CIO Chris Bedi

Bedi recently spoke with Computerworld and explained why he sees the introduction of ChatGPT and genAI as a watershed moment for enterprises, and why he worries less about what could go wrong and more about whether he’s creating an environment where the technology can advance as fast as its capabilities enable it. 

The following are excerpts from that interview.

When did your company begin using AI on any level? “I joined in September 2015, and I remember meeting with our machine learning team as part of my onboarding. So, we’ve been doing machine-learning applications since as early as 2015. As you can imagine, in 2015 a lot of this was more pilots and science projects.

“Over the years, we’ve scaled it tremendously. The industry hasn’t really settled on a term. What do we call the AI that existed before genAI? I just call it analytical AI. If you think about it, it’s infusing machine learning into all of our important ranking, rating, or recommendation [engines] on where revenue is going to end up, the possibility that a sales deal is going to close, the likelihood that we could have a customer doing this. We’ve been doing this for a long time.

“And we’ve been building a lot of AI into the ServiceNow platform as well — whether it’s NLU, NLP, machine-learning-based calculation mechanisms, risk ratings, routing of customer cases, etc. Even before genAI, we were using ML to automatically assess processes to find bottlenecks and inefficiencies. Once ChatGPT came out, we very quickly started moving to experiments, and now those experiments are in production.

“Most leading CIOs did something similar. If I call it analytical AI, they’ve been at it for a while. What genAI and ChatGPT have done is wake up the rest of the C-suite to this whole AI thing that most leading CIOs have been on for a while.”

Can you give me a definition for analytical AI? “I struggle with it, too. We were doing AI before ChatGPT came along, but we were calling it supervised machine learning, unsupervised machine learning, natural language understanding, natural language query, natural language processing. All those names were under the category of AI. Now we have genAI, so do we call it traditional AI? Do we call it analytical AI, because most of it has to do with numbers? So, when I say analytical AI, it’s a placeholder term for all the AI that came before ChatGPT came along. So, again: NLU, NLP, NLQ, virtual agents, process mining, RTA… a basket of stuff.”

What changed in November 2022 when ChatGPT was released? “I think what changed in November, and I ask myself this, is this a metaverse moment where we’re all enamored with the tech? Or is this blockchain, where we’re all searching for use cases? Or is this more of an ‘iPhone moment,’ which is really going to change almost everything we know?

“I feel like this is more like the iPhone moment. With large language models as the underpinning of genAI, what defines a language is where it’s up to us to reimagine. Coding we’ve now defined as a language. Obviously, there’s text-based content, whether it’s resumes or summaries of security incidents; all of that is a language. We’ve been doing things on the NLP [natural language processing] side with products for a few years.

“So, we’ve been partnering with Hugging Face and StarCoder in developing large language models on the product side of the house. And, obviously, I serve as customer zero for all of our products, which is why we’ve been able to release products so quickly. I think we were one of the first to offer real working [AI-infused] products to the marketplace.”

Can you offer up some of the top use cases for genAI at ServiceNow — both internally and externally? “Right now, we have 15 use cases that are live and using genAI. I’d simplify those into four general use cases. One is around customer/employee experience. The industry has been after case deflection, employee self-service for a while now. That’s not a new concept. I think what genAI did is it gives that function a step up in terms of effectiveness.

“If you search for an answer on any customer support site, including our own, you’ll now get a genAI response. That’s the difference between a search-engine response, where you get a bunch of links, and a genAI response, where you get the information you’re actually looking for.

“Even in 10 weeks, we’ve seen a 3% to 4% jump in case deflection rates on our customer support site. We’re seeing very similar results on employee self-service. As employees need to know, ‘Who’s my benefit provider? How do I get a new laptop? How do I get a new travel card? Is this thing worthy of a press release?’

“All those answers that are buried in…corporate policies and documents — genAI makes them instantly accessible. So, customer and employee service is number one.

“The second use case is…agent productivity. An agent could be an IT agent, HR operations, customer support, or someone in finance. How do you help an agent be more productive by analyzing large sets of information quickly, summarizing it for the agent so they can get to the heart of the answer quickly?

“On the flipside of that, as they’re managing their work and handing it off from one person to another or resolving it, they need to send a nice summary to the customer; genAI can write that summary for them. And we’ve actually seen that 70% of agents are accepting those genAI summaries with minimal edits. We actually measure ‘minimal.’

“We’ve seen an increase in cases solved per agent per week. We’ve seen shortened durations in the time it takes to resolve cases. So any way you slice it, the productivity boost is starting, and we’re in the early innings. I know we’ll get better.

“One other measure we’re looking at, which I think is really important, is the sentiment of the people using genAI. I’ll take my own shop that’s using AI for its IT agents, I think 56% have already said… this thing is a boost to their productivity. That sentiment is hard to get to because we’re so wedded to our current ways of working.

“The third use case is accelerating digital transformation. So, text-to-code is real. Text to workflow is real. What we’ve seen is a 26% acceptance rate on our software developers accepting what genAI is providing them on a text-to-code standpoint. For people outside the industry, 26% may seem small, but as a practitioner, I’m really pleased with that number. If you pull on that thread, that’s 26% more lines of code that can be written without a human having to do it. Pull on the thread a little more and you’ve got a 26% productivity bump in an area that’s one of the scarcest talent areas regardless of the industry — software engineers. And it’s only going to get better.

“I would say there’s work to do on sentiment and adoption. People have been working a certain way for decades. GenAI is very new and, like anything new, it’s going to take some time for the adoption rate to climb to the point where it’s like you or I listening to Spotify for music or something like that.

“Even when genAI works, I think the adoption rate will be a bit slower than leaders like myself want. That’s the standard recipe of change management, training, and skillset development on all those things.

“The fourth use case is around…how to help a human become an instant expert. If you think about us as a high-growth software company, we want to serve our customers in the best way. We have lots of innovation coming out of our platform every month; keeping up with that in the interest of serving our customers is pretty hard. I think about a new person joining ServiceNow. If you remember the movie “The Matrix” when he plugs that thing into the back of his head and he instantly learns how to fly a helicopter — that’s the vision I have.

“So, we put all our product documentation, from high-level value messaging to low-level product expectations — how do you configure this — and took every RFP response, every sales presentation and sales training, indexed it in large language models. So now, if someone joins ServiceNow on a Monday, on Tuesday morning someone from [our customer] FedEx asks how do your solutions help out with operational technology risk management, that new employee can quickly go to a portal and type in the question and will get a very intelligent response. Think about that context of ‘instant expert,’ it can apply to any persona. We just happened to apply it to the one that serves the customer first, but we’re going to roll that out across the organization.”
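The “instant expert” workflow Bedi describes — index internal documents, pull the relevant ones for a question, and hand them to a language model as context — can be sketched in a few lines. This is a hypothetical toy illustration, not ServiceNow’s implementation: retrieval here is stand-in keyword overlap, the sample documents are invented, and a production system would use embeddings and an actual LLM call where `build_prompt` stops.

```python
# Toy sketch of the "instant expert" pattern: retrieve the most relevant
# internal documents for a question, then assemble them into an LLM prompt.
# The scoring (keyword overlap) and sample docs are illustrative only.

def tokenize(text: str) -> set[str]:
    """Lowercase word set, with trailing punctuation stripped."""
    return {w.strip(".,?").lower() for w in text.split()}

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by how many question words they share; keep the top k."""
    q = tokenize(question)
    ranked = sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    """Bundle retrieved context and the question for a downstream LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Operational technology risk management is covered by the GRC module.",
    "Travel cards are issued through the employee self-service portal.",
    "New laptops are requested via the IT hardware catalog.",
]
prompt = build_prompt(
    "How do your solutions help with operational technology risk management?",
    docs,
)
```

The design point matches the interview: the new hire doesn’t need to know where the answer lives; the index does, and the model only summarizes what retrieval surfaces.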

Explain what the term ‘customer zero’ means in terms of rolling out new technology. Are you saying you roll it out for internal operations first? “Two things: I’m talking about deploying a lot of this for internal and customer-facing use cases, but as a software provider we’re offering a lot of genAI products to the marketplace. When I say customer zero, I mean we use all that same technology to power our own business. We use it to scale ourselves, and the productivity gains we get from using our own platform are self-evident. So, by customer zero, I mean we use every one of our products internally and prove out the value before our customers use them, to make sure we’re confident on the technical side and the business side, change management, etc.”

What feedback are you hearing from your peers and customers about the current landscape for IT decision makers when it comes to genAI? “Because ServiceNow serves about 90% of the Fortune 500, I have the privilege of talking to a number of Fortune 500 CIOs every week. I’d say there are three camps:

  • One camp is saying, ‘We have to spend the next three or four months figuring out governance, figuring out security, figuring out all the underpinnings in a super-solid architecture, before I start doing pilots and putting stuff into production.’
  • A second camp is saying, ‘Yep, the tech works, but I need to see a real ROI before I invest material human capital or dollars into this.’
  • A third camp believes this is inevitable. ‘We need to just get on with it. We don’t measure the ROI [on an established technology like] email, and genAI in the workplace will be as common as that. So, the faster we get moving, the better.’

“Again, none of the viewpoints are wrong. Everybody is probably doing some of each, but the common underpinning is everyone is doing something around genAI. Most CIOs I’m talking to have been asked by their CEO or C-suite to have a genAI strategy for each department.”

When it comes to corporate policies, standards, and oversight, what have you instituted to ensure the safe and ethical use of genAI, especially in light of President Biden’s recent executive order? “With President Biden’s new rules, I think that’s a great step in the right direction. We have a commitment to the responsible and ethical use of AI models.

“So, we’re supportive of the new executive order. But we also have an AI ethics and governance council…to protect our employees and customers from bias, cybersecurity vulnerability, data privacy risks, and even the user experience of transparency. If someone is getting a genAI answer to their question, we want them to know this was generated by a machine.

“We are very focused on it. We have an AI governance and ethics committee, which is cross-functional in nature. It’s legal, it’s my organization, it’s our product organization, it’s cyber. We test all that out before releasing anything to the market or our own employees.”

Have you been concerned about baked-in biases that have been evident in some genAI platforms, such as automated hiring assistant applications? “For sure. Bias and hallucinations. Hallucinations are real. How do you monitor for hallucinations? If you take my instant-expert example, how do you make sure the models are good enough that you don’t lead someone down the wrong path? Think back a few years, to when Apple Maps first came out and was directing people to drive into lakes.

“How do we make sure genAI isn’t taking accurate content but putting it together in a way that becomes inaccurate? I think those are real issues the industry is still unpacking. It also comes back to your level of sophistication in being able to measure hallucination rates and only displaying an answer when the confidence level is above ‘X.’ There are ways to do that. It’s a story whose final answer is yet to unfold, but, yes, I do worry about it.”

At a high level, how do you establish ‘X,’ i.e., an acceptable level of AI accuracy? “We keep stuff in the lab until we’re confident in the level of hallucination rates. Then, there’s also no substitute for a human in the loop who can say, ‘This isn’t right.’ We’re still smarter than the machines.”
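The gating idea Bedi outlines — only display a generated answer when its confidence clears a threshold ‘X,’ otherwise hand off to a human — can be sketched as follows. The confidence score, the 0.8 threshold, and the names here are all hypothetical stand-ins (real systems might derive confidence from token log-probs or a separate verifier model), not ServiceNow’s actual mechanism.

```python
# Minimal sketch of confidence-gated genAI answers with a human fallback.
# Threshold and confidence source are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float  # hypothetical score in [0, 1]

def answer_or_escalate(draft: Draft, threshold: float = 0.8) -> str:
    """Show the answer only above the threshold; otherwise route to a human."""
    if draft.confidence >= threshold:
        # Label machine authorship, per the transparency goal mentioned above.
        return f"[AI-generated] {draft.text}"
    return "Routed to a human agent for review."
```

This also captures the “human in the loop” half of his answer: low-confidence drafts never reach the user directly, and the reviewer becomes the backstop that says “this isn’t right.”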

How are you getting your own people up to speed on AI development internally and with skills like prompt engineering? “When we met together in June, we laid out rules, objectives, and training. For my own organization, we laid out an AI training path — there’s an AI 101 and a 201. We have to do some work to curate that; it’s a combination of training we developed and publicly available sources.

“The key is making sure the talent in the organization has a path to learning AI, and this isn’t something being done to them. This is a path to a journey. And, actually, we’ve broadened that to the whole company this past quarter. We’re going to hold an AI learning day for all 20,000-plus ServiceNow employees to get very familiar with AI, because we’re all going to be working alongside AI.

“In the future, we’ll be taking that down to a department and persona. As we craft our AI strategies, we have to marry that up with what this means for the human who’s now going to be working with genAI or traditional AI, and where maybe AI is now doing x-percent of their job, which can be discomforting; but that’s our job as leaders: to bring the workforce along and give them the talent, tools, and training they need to be successful in an AI-centric world.”

AI is probably going to eliminate some tasks and, in some cases, jobs. Do you see AI having an impact on employee headcount? “Not really. We’ve had automation technologies for a long, long time. Go back to Excel and think about all the work analysts had to do before it. I haven’t seen anything showing there are fewer analysts in the workplace today. I think it’s going to allow people to do more interesting things and now you can relegate those repetitive tasks to machines…, and if any workplace believes it has a shortage of work to do, I’ve yet to find any of them. If we can relieve people of the 20% of the toil of doing stuff we can relegate to machines, I’m confident there’s 20% more work to do in these organizations.”

What keeps you up at night when it comes to AI? “I think about whether I’m moving fast enough. That’s what keeps me up at night. We all intellectually know there’s a massive unlock of productivity, of efficiency, of efficacy, of experiences we can create. Are we moving fast enough? I know we have to pay attention to security and hallucinations; that’s a given.

“The biggest constraint on the pace of change has typically not been security or governance; it’s been the human capacity to absorb the change. So, am I creating the right conditions for us to absorb the change? Because I firmly believe the companies that fully embrace AI are going to be the winners of tomorrow.”

What are you using for a genAI platform? GPT, Llama, PaLM 2? Or are you mostly using your own homegrown LLMs or open-source models? “We have large language models in the ServiceNow platform. For a lot of the use cases, we’re using that. For certain use cases, we’re using Azure OpenAI. I think the industry will evolve a lot. We’re also dabbling with some open-source models for some data that’s highly confidential, where we don’t want to take risks. That’s what I’m hearing from most CIOs. Yes, they’re doing stuff with the large hyperscale models, but they’re also doing some things with open source, and they’re consuming it through platforms like ServiceNow.”

Do you see the direction genAI is going as away from hyperscale models like GPT-4 toward smaller LLMs that are domain specific? “One hundred percent the latter, 100%. That’s what we’re developing within our platform — domain-specific LLMs — specific to use cases where there’s a high density of data already within our platform and we’ve found the efficacy of those models is better than the more general-purpose models.”

It costs a lot of money to train up LLMs. How are you dealing with that? “Well, the domain-specific models require less compute power. We also have a partnership with Nvidia that’s been public knowledge. We’re using Nvidia software and chips to power our LLMs, and we’ve been pretty successful at it. We’re already in production with that. Again, that’s a domain-specific model versus the economics of a general-purpose LLM.”