
Lucas Mearian
Senior Reporter

Q&A: NY Life exec says AI will reboot hiring, training, change management

Feature
Nov 14, 2023 | 17 mins
Artificial Intelligence | Augmented Reality | Chatbots

During its 178 years, New York Life has had to adapt many times; now, AI is affecting nearly every corner of the insurance business, from hiring to client services, says Alex Cook, senior vice president and head of strategic capabilities at the firm.

In 2015, New York Life Insurance Co. began building up a data science team to investigate the use of predictive models to improve efficiency and increase productivity.

There were quite a few deployments of predictive models across the company, with a little artificial intelligence to aid in automation. Most of the projects centered not on machine learning and AI but on traditional data science. Models were generally used to support actuarial assumptions, aid agent recruiting, and enhance the purchase experience (e.g., bypassing the need for blood tests in underwriting).

AI was also used in creating marketing campaigns to determine the most appropriate audiences to target.

In November 2022, everything changed. San Francisco start-up OpenAI launched ChatGPT, a large language model (LLM)-based chatbot that enterprises could use to automate all kinds of tasks and scour internal documents for valuable content. It could summarize email and text threads and online meetings, and perform advanced data analytics.

Agents and service representatives could use the chatbot technology to obtain detailed answers for clients in a fraction of the time it normally would take. New employees no longer needed months to be brought up to speed; instead, they could be trained to use generative AI to find the information they needed to do their jobs.

Last month, New York Life announced the hire of Don Vu, a veteran data and analytics executive, to lead a newly formed artificial intelligence (AI) and data team responsible for AI, data, and insights capabilities, and for aligning data architecture with business architecture in support of the company’s business strategy and objectives.

His work will be “essential” to the 178-year-old company’s future strategies around AI and the desire to create industry-leading experiences for customers, agents, advisors, and employees.

Alex Cook, senior vice president and head of Strategic Capabilities at New York Life, created the company’s data science team eight years ago. Cook sits on the company’s executive management committee, and his responsibilities include enterprise-wide technology, data, and AI as well as strategy and development and overseeing New York Life’s corporate venture capital group.

New York Life’s Alex Cook

Cook spoke to Computerworld about the company’s AI investments and how genAI is changing its approach to internal skills needs and hiring. The following are excerpts from that interview.

What are some of the biggest AI challenges at New York Life? “One of the challenges is just ensuring, as we’re building out some of these capabilities, that we’re doing it in places where it really makes a difference. It’s easy to see a shiny object and for people to get excited about it. And there are so many potential applications. We need to stay focused on those things that really move the needle for the company and for our clients and agents, and make the client-agent experience better. That’s a really critical focus for us right now. And we need to ensure that what we’re building is really feasible, as opposed to something where, once you dig into it, you don’t have the right conditions for success.

“There’s another workstream I’m very focused on — talent and change management. It’s really critical that we have the right people for that. Don [Vu] is a good example of getting the right people on board who understand how to do this well. Change management needs to be a critical focus for any enterprise, because it has so much impact in so many areas. It’s not just that people will have to take on new skills and capabilities; it’s managing the change when some people will be augmented, and some will be displaced, by some of these tools. How do you manage the change effectively? That’s been a critical focus for us.

“Governance is another one. [There’s] a whole ethical AI focus that continues with generative AI. How do we build these things and build them well? How do we do the right kind of testing for unintended bias? It’s critical we do the right testing for accuracy and make sure that’s well understood and governed.

“And we’ve really thought a lot about different scenarios and the planning for … where things could go from here and trying to make sure the company is prepared….

How has New York Life been using genAI? “We have [AI] models we use to aid in making decisions when hiring agents and advisors: which of them will be more likely to succeed in their careers?

“There’s quite a lot of AI use in the marketing space. That’s where there’s a bit more use of AI as opposed to more statistical models. In general at New York Life, I’d say the focus has been, wherever we’re applying data science or AI, if it’s in the realm of a decision that could impact a client, we’re very careful to make sure those aren’t black-box models. We want to make sure they’re explainable. But it’s particularly important when you’re talking about underwriting — determining somebody’s risk class. You’ve got to make sure the data is relevant and that the decision is based on factors you understand.

“That’s an important baseline. As a mutual life insurance company, we really do have tight alignment of interests — particularly with our core policyholders, who, at the end of the day, are the recipients of the dividends we deliver.

“In that context, we’ve had a lot of focus on ethical AI and ensuring we’re appropriately reviewing the data that’s used to develop and run models. As a heavily regulated industry, we have a lot of focus on the patchwork of regulation on the federal and state level. So, we need to make sure we’re on top of any regulation coming in from the states around use of AI and data. And then we have our own standards that ensure we’re not just in compliance with specific regulators, but with our own standard of practice.”
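Cook’s distinction between explainable models and black boxes can be made concrete with a toy sketch: a logistic-style score whose decision decomposes into per-factor contributions a reviewer can audit. The factors and weights below are invented for illustration; they are not actual underwriting criteria.

# Toy, invented scoring model whose decision decomposes into visible
# per-factor contributions. Factors and weights are illustrative only,
# not actual underwriting criteria.
import math

WEIGHTS = {"age_over_60": 0.8, "smoker": 1.2, "normal_blood_pressure": -0.6}
BIAS = -1.0

def risk_score(applicant: dict) -> float:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    for factor, value in contributions.items():
        print(f"  {factor}: {value:+.2f}")  # each factor's effect is auditable
    logit = BIAS + sum(contributions.values())
    return 1 / (1 + math.exp(-logit))       # logistic squash to a 0-1 score

print(risk_score({"age_over_60": 1, "smoker": 0, "normal_blood_pressure": 1}))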

President Biden recently announced an executive order restricting how AI can be used. Especially in financial services, do these rules advance anything, or were existing regulations already enough to deal with AI’s issues? “I think existing rules are very much a patchwork. If you look at the areas regulators have been focused on, like underwriting, different states have different levels of understanding. I do think it’s important that regulators come up the curve with AI, and generative AI in particular, …just in terms of understanding how these technologies work and how to govern them well. So, I do think it’s a good thing that regulators are starting to dig in and educate themselves on what these tools can do. I don’t know that we’ll need a ton of incremental regulation above and beyond what we have today, but there are cases when it’s important to understand the underlying context.

“For example, [take] some things we do, particularly in the insurance domain. Underwriting by its very nature is a discriminatory practice, meaning you’re trying to understand differences in health when, for example, you’re attempting to issue a life insurance policy. This is not a mandated product; it’s completely voluntary.

“So, it’s important we’re able to retain the use of some information in making that determination. And some regulators are confused about elements of that. For example, in some earlier discussions with regulators, [they said], ‘Gee, if someone’s disabled, you can’t use that to discriminate against them.’ Well, if you’re issuing disability insurance, you do have to take that into consideration or we won’t be in business long.

“I do think it’s important regulators understand as they step into regulating some of these new technologies that they don’t take inadvertent steps or misunderstand what these models can do.”

What changed for New York Life last November when ChatGPT was launched? “I think the biggest thing was recognizing the potential for these new approaches to really enhance things we’d been dabbling in, where clearly the quality wasn’t sufficiently high. Chatbots are a great example. Up until that point in time, chatbots were very limited in what they could do and often were more frustrating for clients than helpful.

“I think that’s very different now. I think the capabilities of chatbots have taken a step-function forward and they’ll continue to improve over ensuing months and years at a very fast pace. So, for me, it was a wake-up call.

“It’s a bit like the analogy of the frog in boiling water. If you put the frog in and slowly turn up the heat, it doesn’t realize how much is happening. That’s a bit of what’s happening in the AI context. It had been slowly advancing for a long time, but then it took a big step forward, and with that there was a recognition — a moment had arrived that was worthy of reassessing the scope of what was feasible.”

How are you preparing your employees and getting staffed with AI skills? “Both external hiring and internal training are critical. We have a lot of focus on training opportunities for different types of individuals to learn about this technology and have a role in its development.

“Typically, we have a lot of subject matter expertise that needs to be employed when you are developing these models. It’s not enough to say, here’s a treasure trove of thousands of documents, point the AI at them, and have it summarize them with great accuracy. That’s not the way it happens. You have to have people go through those documents to understand what’s in them. Have they been appropriately tagged with metadata that will give the models some direction on what to source in response to a question?
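A minimal sketch of that tagging idea follows; the schema, products, and topics are invented for illustration and are not New York Life’s actual metadata.

# Each document chunk carries metadata a retrieval step can filter on
# before any text is handed to a model. Schema and values are invented.
from dataclasses import dataclass, field

@dataclass
class PolicyChunk:
    text: str
    product: str       # e.g. "whole life", "disability"
    policy_year: int   # year the policy form was issued
    topics: list[str] = field(default_factory=list)

corpus = [
    PolicyChunk("Rider X may be added within 5 years of issue.", "whole life", 1993, ["riders"]),
    PolicyChunk("Dividend option Y pays annually.", "whole life", 2004, ["dividends"]),
    PolicyChunk("Benefit period Z runs to age 65.", "disability", 2010, ["benefits"]),
]

def candidates(product: str, topic: str) -> list[PolicyChunk]:
    """Narrow the corpus by metadata so the model sources the right documents."""
    return [c for c in corpus if c.product == product and topic in c.topics]

for chunk in candidates("whole life", "riders"):
    print(chunk.text)  # -> "Rider X may be added within 5 years of issue."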

“There’s a need for people to really be a part of that development process; that will be true for quite some time. Then you’ve got the whole dynamic of prompt engineering: getting smarter about how to ask these models questions, and doing so in a more iterative way.
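A toy illustration of that iterative pattern: keep prior turns in the message history and narrow the question each round. The ask function below is a hypothetical stub standing in for any chat-completion call.

# Hypothetical stub standing in for a real chat-completion call.
def ask(history: list[dict]) -> str:
    return f"(model reply to: {history[-1]['content']})"

# First pass: a broad question.
history = [{"role": "user", "content": "List the riders available on this policy."}]
history.append({"role": "assistant", "content": ask(history)})

# Second pass: the follow-up builds on the first answer instead of
# restating everything from scratch.
history.append({"role": "user", "content": "For each rider, when can it be added after issue?"})
print(ask(history))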

“As we engage in AI development, we’re also assessing what competencies we need in our existing staff. What opportunities will there be in how we change the nature of their roles and support them in that effort? There’s a lot of focus on that in our HR department.

“At the same time, we understand that we do need to bring in external talent to help ensure we’re moving quickly in this space, because it will be developing fast. As we look at machine learning operations, and even LLM ops, and at really understanding the tech stack, there’s a real need for people who have proficiency in those areas, and we need to make sure we’re bringing that talent into the company.”

How do you address angst around AI taking people’s jobs? Do you see it eliminating more jobs than it creates? “I think it’s mostly the latter. Meaning, I see a lot of these AI technologies primarily being [augmentative] for people, allowing them to focus their skills and efforts on more complex things.

“We really do need human engagement and empathy. I think that’s something we definitely see with our agents and advisors. Their role may change to become much more relationship-driven and perhaps a little less technical, as that will be covered more by AI. It will be similar with our service reps; their job becomes much more focused on ensuring the client or agent experience is holistic, and on becoming more forward-thinking about how they’re delivering the right experience. And again, some of the most technical aspects of their tasks will be covered more by AI assistants.”

Will AI displace workers? “There will be some displacement, especially with the historical practice of bringing new people in at an entry level, where they learn the ropes on the simpler stuff for some time and then expand their product knowledge over time. I think that route is going to get closed off a bit, and we’ve already encountered some of that over the last few years.

“You really need to upgrade your recruiting and training, because the nature of the role on day one is different than it used to be years ago. It’s less about coming up to speed quickly on detailed, technical knowledge, and more about learning the right management and relationship skills, and learning how to best avail yourself of the technology that will enable you in your job.

“So, it does put a different emphasis on that training. I think there’s going to be a need for a lot of people to help develop AI, and I think there’s a lot of excitement around our existing folks helping to develop this next set of capabilities. For the most part, I think a lot of them will be thankful that they don’t have to engage in a lot of the more mundane tasks they used to do.

“There are implications for our rate of hiring in some of these kinds of roles. I think every company will be facing that dilemma to some degree, and it will have an impact on the job market. For the most part, as they have in the past, people will find new ways to use these tools that are already being created.

“New technology on the margin may have some degree of displacement, but very often it augments and then people find better things to do. I think we’re going to see that here as well; it’s just the pace of change may be a bit faster compared with other technologies of the past.”

Are you using external AI models or are you developing domain-specific internal models to address your business-specific needs, and what security concerns do you have? “As we started to understand some of the potential of generative AI, earlier this year we formed a steering committee, which I chair, to ensure we had a tight focus on multiple dimensions. One of those dimensions was enablement: making sure, from a tech-stack perspective, we had access to a set of models — OpenAI’s model, Anthropic’s model — from either Microsoft Azure or Amazon Web Services, and that we had the right security review around those web-accessed models. We wanted to ensure we could start using them in the context of our proprietary data and be comfortable that the information wasn’t going to get used to train one of those models [and] inadvertently end up getting leaked somehow.”
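For flavor, the kind of setup Cook describes might resemble the sketch below, which calls a chat model through an Azure OpenAI deployment; under Azure’s enterprise terms, prompts are not used to train the underlying model. The endpoint, deployment name, and prompt are hypothetical placeholders, not New York Life’s actual configuration.

# Minimal sketch: calling a chat model through an Azure OpenAI deployment.
# Endpoint, deployment name, and prompt are hypothetical placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-4",  # the name of your Azure deployment, not the raw model ID
    messages=[
        {"role": "system", "content": "You answer internal policy questions for service reps."},
        {"role": "user", "content": "Summarize the rider options on policy form XYZ-123."},
    ],
)
print(response.choices[0].message.content)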

A lot of the focus today in enterprises seems to be more toward smaller, domain-specific LLMs developed in-house versus the more general and amorphous LLMs like GPT-4, Llama 2, and PaLM 2. Is that the direction New York Life is headed? “We’re trying to take advantage of what’s publicly available; there are lots of places where employees or agents would use something like ChatGPT to help them in their role, and this is where we’ve been leaning into training folks rather than saying, ‘You cannot use it,’ because it’s going to be harder and harder to contain. You’re going to be better off educating people.

“So we said you can use those tools, but you cannot use any PII or PHI or confidential information, and we introduced scanning tools to look at any flow of information from our networks to prevent any inadvertent use of those tools for that purpose.
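In spirit, that kind of outbound scanning resembles the sketch below, which checks text for obvious PII patterns before it can be sent to an external tool. Real deployments rely on commercial data-loss-prevention tooling; these patterns are illustrative and far from exhaustive.

# Minimal sketch of outbound PII scanning. Patterns are illustrative,
# not exhaustive; production systems use dedicated DLP tooling.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_outbound(text: str) -> list[str]:
    """Return the names of any PII patterns found in outbound text."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

prompt = "Summarize the claim for John, SSN 123-45-6789."
hits = scan_outbound(prompt)
if hits:
    print(f"Blocked: prompt contains {', '.join(hits)}")  # Blocked: prompt contains ssn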

“Then you move into proprietary use cases: development where you’re actually taking those models and using them with the huge amount of our own internal data. At this stage, our approach has been to develop and train our own models while still using what is available from companies like OpenAI and Anthropic. We’re also working with a few different models to test them out, such as Llama 2 and the Claude models from Anthropic, as well as the GPT models from OpenAI. What we’re doing is ensuring we’re constructing those with a focus primarily on use cases around knowledge management — so, like a lot of other companies, tools to aid service reps and our agent advisors. We’ve got decades of policies on the books that are still active. With all the different options in a lot of those products, it’s hard for anyone in our organization today to be able to say about a policy written 30 years ago, ‘Here are all the different options and features you could use.’

“So a lot of our focus has been on standing up a generative AI conversational interface to a lot of that historical policy feature set, and to other support areas within our service organization. It’s a tool that should help our service reps be more productive and limit the degree to which they get a call from a client or an agent and have to say, ‘I can’t answer that one. Let me put you on hold while I go find someone who can.’”
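Architecturally, such a conversational interface is typically retrieval-augmented: fetch the relevant policy text first, then ask the model to answer only from it. The sketch below assumes hypothetical search and complete helpers standing in for a vector store and a chat model.

# Retrieval-augmented answering: fetch relevant policy text, then have
# the model answer only from that text. `search` and `complete` are
# hypothetical stubs for a vector store and a chat model.
def search(question: str) -> list[str]:
    # Stand-in for a vector-store lookup over tagged policy chunks.
    return ["Policy form ABC-1990: Rider X may be added within 5 years of issue."]

def complete(prompt: str) -> str:
    # Stand-in for a chat-completion call (e.g., Azure OpenAI or Bedrock).
    return "Per policy form ABC-1990, Rider X could only be added within 5 years of issue."

def answer(question: str) -> str:
    context = "\n".join(search(question))
    prompt = (
        "Answer the service rep's question using ONLY the policy text below.\n"
        f"Policy text:\n{context}\n\nQuestion: {question}"
    )
    return complete(prompt)

print(answer("Can the client still add Rider X to their 1990 policy?"))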

Internal, domain-specific chatbots — is this the kind of technology that will allow a new employee to quickly come up to speed and answer client questions that might previously have taken months of training? “Much faster than would have been the case in the past. It enables relatively inexperienced service reps to respond to client questions much faster than they could otherwise — and without the need to so frequently tap into a group of experts to get the answer to relay back.

“It’s really acting as an internal expert assistant, so that when complex questions are asked by clients or agents, the service rep is in a good position to respond directly.”

Did you find the contracts with genAI providers offer the same data protections you have with other vendors? “We pretty quickly bifurcated between the use of publicly available tools like ChatGPT and enterprise arrangements. OpenAI has progressed from their original release of ChatGPT, when they were taking all the information people were putting in — all the prompts and responses — and using it to train the model. Then they [OpenAI] launched a subscription service you can purchase, with an option not to have your prompts and responses included in training their models. But you have to pay for that, effectively. And that led into an enterprise option for OpenAI that similarly has those kinds of protections. So, they’ve come along in their development.

“We were working with Microsoft Azure and their APIs for access to OpenAI’s models — GPT-3.5 and GPT-4. Via Microsoft Azure, they have the right kind of licensing to ensure none of your information will be used to train their models or otherwise be retained or exposed. As soon as you start to work with AWS or Azure, they’ll typically have that kind of contracting capability and platform capability to ensure you’re able to monitor that usage.”