To Build or Not To Build Your Own AI Team

Pros and cons of building your own AI team.

September 4, 2023

In today’s economy and labor market, there’s a great opportunity for AI to provide value when used to support existing businesses at scale, says Eric Lefebvre, CTO of Sovos. Of course, with so much attention on AI, it’s difficult to separate actionable strategies from hype, and for CIOs and CTOs, it can mean a never-ending series of questions coming from all angles.

Perhaps you’ve noticed that Artificial Intelligence (AI) is getting a lot of press these days. And if you haven’t, I’d welcome you back from whatever remote location you’ve been living in. 

The question of whether technology leaders should be spending limited operating expenses on AI in today’s climate is more complex than I can address in this discussion, so today we’ll stick to generative AI for business use.

Forms of Generative AI

For those familiar with McKinsey’s CIO and CTO guide on generative AI, we’re focusing on the Shapers from step four. For the TL;DR crowd, McKinsey describes three archetypes for AI model consumption: Takers, Shapers and Makers. Takers use publicly available, commercialized AI solutions like ChatGPT, Microsoft 365 Copilot, GitHub Copilot, or Google’s Bard. Shapers connect existing models to their own proprietary data sources, e.g., connecting Azure AI Bot Service to your support wikis to provide chatbot capabilities to your customers. Makers build foundational models for others to use and train: OpenAI’s GPT-4, Google’s PaLM 2 or Amazon Bedrock. Within a larger organization, groups of each archetype may exist side by side.

While employee-facing AI tools are quickly becoming commonplace and priced at premium levels of $20 to $30 per user per month, CIOs may get sticker shock when they realize Microsoft 365 Copilot costs $360 per user per year, potentially more than their Office 365 per-seat subscription, which must still be paid for as well. Like any nascent technology, we can expect prices to come down over time, but there will be a floor, as generative AI still requires massive data center scale.
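The seat-cost math above is easy to sanity-check. A minimal sketch, using the $30-per-user-per-month Copilot figure from this article; the Office per-seat price below is a placeholder assumption, not a quoted number:

```python
# Back-of-the-envelope seat-cost comparison.
COPILOT_PER_USER_PER_MONTH = 30   # USD, per the article ($360/user/year)
OFFICE_PER_USER_PER_MONTH = 23    # USD, hypothetical base subscription price

def annual_ai_spend(users: int) -> dict:
    """Annual cost of adding Copilot on top of an existing Office subscription."""
    copilot = users * COPILOT_PER_USER_PER_MONTH * 12
    office = users * OFFICE_PER_USER_PER_MONTH * 12
    return {
        "copilot_annual": copilot,
        "office_annual": office,
        "combined_annual": copilot + office,
        # Fraction of total seat cost that the AI add-on represents
        "ai_share_of_seat_cost": round(copilot / (copilot + office), 2),
    }

costs = annual_ai_spend(users=1_000)
```

At these illustrative prices, the AI add-on is more than half of the total per-seat spend, which is exactly the sticker shock described above.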

In the Taker camp, your primary focus will be intellectual property (IP). Who owns the IP in the output of a consumer-focused AI tool? How do you prevent your company’s critical data and knowledge from being absorbed into the model for training, as happened to Samsung? In this instance, you’ll have little to no need for a bespoke AI team, but you will need to educate existing staff on the permissible use of these tools and configure the tools not to contribute to learning, while your legal team reviews IP ownership and the risk of appropriating someone else’s IP.

If you’re in the Makers’ camp, you already have AI engineers on staff and are out building GPT-5, PaLM 3 or some other cutting-edge capability under Area 51-level security. You’ve been evolving your technology from correlation algorithms to machine learning, and now to generative AI interfaces on top of large language models.

That leaves us with the Shapers. So, you’ve decided, or been told by your CEO or board, that you need to invest your precious technology operating expense in generative AI tools. Your next step is to make sure those tools deliver a return on that investment. Let’s be absolutely clear: use cases for generative AI are everywhere. Every company that provides customer care, some form of research, regulatory or legal services, or has real people in its sales and marketing departments has a use case for implementing generative AI internally under the Shaper archetype. The question is not why, but how. Because if you don’t, you will lose in the long run.

See More: 3 Ways To Motivate Non-technical Teams To Use ChatGPT

Investment Across Departments

Generative AI projects should be assessed with the same rigor as any other technology investment. Cloud-based tools and contract labor allow for A/B testing, so you can measure effectiveness in a very short amount of time.

  • Chatbots with the same knowledge as your agents can help defer headcount costs as the business grows, deflecting calls and putting the focus on the quality of the support knowledge base, which has been a struggle at every company that has ever existed. To gauge the potential ROI, find the product with the best support wiki and connect a public cloud-based generative AI tool to train a chatbot on your support site. Support is one of the most closely monitored functions in a company, with a mature set of metrics, so evaluating the AI-enabled chatbot against the non-AI-enabled chatbot, or even a human agent, should be relatively easy. The results of that effort will determine whether further investment is worthwhile, and the business case builds itself.
  • Research into regulations, patents, or any other publicly available data source, government-supported or not, can free researchers from scraping websites for manual data collection, leaving time to validate the results and determine their application within the product set. In this case, the model needs to be trained by subject matter experts on which government sites need to be reviewed and how regulation impacts products or clients. The most difficult aspect of this model is figuring out what controls should be used to validate the results and enhance the model. But it has a massive payoff if done well and will help prevent attrition in the research or regulatory teams.
  • Sales and marketing functions would clearly benefit from greater visibility into client segmentation, marketing campaign targets and sales account planning. Six months ago, this would have been on the list for a pilot, but with recent announcements from Salesforce with Einstein GPT, Microsoft’s Sales Copilot, SAP’s Business AI, People.ai with Account GPT, and AI Marketo, this space is quickly moving from the Shaper to the Taker archetype. That said, there’s still value in piloting concurrent marketing campaigns for demand generation: one leveraging AI and one traditional, while also giving one half of a vertical team AI-enabled CRM tools and training to compare against historical performance and the non-AI-enabled half of the team.
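The chatbot bullet above is the Shaper pattern in miniature: ground a generative model in your own support wiki rather than relying on its training data. A minimal sketch of that pattern; the wiki entries are hypothetical, the retrieval is naive keyword overlap, and the `answer` function is a stand-in for a hosted LLM call (Azure OpenAI, Bedrock, etc.), not a real API:

```python
# Toy stand-in for a proprietary support wiki (the Shaper's data source).
WIKI = {
    "reset-password": "To reset a password, use Settings > Security > Reset.",
    "invoice-export": "Invoices can be exported as CSV from the Billing tab.",
}

def retrieve(question: str) -> str:
    """Pick the wiki article sharing the most words with the question.
    A real Shaper deployment would use embeddings and a vector index."""
    q_words = set(question.lower().split())
    return max(WIKI.values(),
               key=lambda text: len(q_words & set(text.lower().split())))

def answer(question: str) -> str:
    """Stand-in for an LLM call: ground the reply in the retrieved context
    instead of whatever the base model learned in training."""
    context = retrieve(question)
    return f"Based on our support wiki: {context}"
```

The point of the sketch is the shape, not the code: the model is generic, but the retrieval layer is yours, which is what makes the resulting chatbot a Shaper asset rather than a commodity Taker tool.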

In the three example use cases above, we see the challenge of deciding where to place bets and funding for AI investments. Our customer care example is likely the easiest and has the cleanest business case model. Our research example will require significant planning and expertise and may be a tougher sell. The go-to-market, marketing and sales example is evaporating as we read this, since all the major platform providers are rushing to embed AI into their products. This one in particular should get sent over to your learning and development team, assuming you have the tools and can fund a pilot. The onus here is now on learning how the platform provider has implemented AI in the toolset and how you can extract the most value from the increase in license cost. All the risks noted before in the Taker examples will be present here as well.

See More: AI Assistants and Platform Engineering

Piloting Your AI Team

Now that we’ve done our AI-enablement brainstorming, validated the use cases, and assessed the marketplace, we need to execute the pilot. The next hurdle is staffing: who should work on this? That question is followed quickly by the inevitable hosting question: our data center or the public cloud? The reality is that these kinds of pilots are a great opportunity to blend high performers and high potentials with external expertise for a fixed timeframe. We’ve found a solid approach is to leverage the innovation lab and architecture council, along with key staff from the impacted business unit, augmented with external subject matter experts.

This provides the best of all options. The innovation lab and architects bring governance and process to the pilot. The external contractors bring domain knowledge of this nascent field, while the key staff get the opportunity to work on something cutting-edge while also getting a break from their day jobs, even if only part-time. In this instance, we’re “building” an AI team from internal experts, external experts and high potentials. They come together for a well-defined scope over a generally fixed duration to achieve a specific outcome. This not only keeps startup costs low; there is also a halo effect from having high potentials work with your senior staff and external experts, which pays massive dividends in retention beyond the pilot objectives.

With a pilot team now staffed, we must address the hosting question. Given the nature of a pilot, it only makes sense to leverage a public cloud solution. Whichever cloud provider you prefer, it has some ability to host your AI pilot platform, whether through one of its packaged solutions or the more traditional route of hosting a third-party model on virtual clusters. The public cloud makes sense for any approach because if the pilot fails, it can be quickly shut down, eliminating the open drain on your budget.

Once you’ve defined the scope of your pilot, staffed it with internal and external people, and chosen a flexible hosting stack, it’s time to implement, train the model, educate the people impacted, and measure the results. This could take as little as a few weeks or as long as a few quarters, and at some point, you’ll have to bring all your stakeholders back together to review the results. The decisions made in that meeting will determine your next steps. If the pilot was underwhelming, you might have the opportunity to refine and measure again, or you may shut it down.

Assuming the happy path, where the pilot delivered on or exceeded the acceptance criteria, you’re now at the inflection point. You will not only have to staff to keep the pilot going, but perhaps even expand to other cohorts. For instance, in the customer care example, you’ll be the victim of your own success, with an onslaught of new requests for AI enablement as every general manager looks to grow the top line and take out operating expenses. That’s when you know it’s time to pull out the staffing plan to build a bespoke AI team, based not on conjecture but on tangible benefit to your organization.

Are you building your own AI team? What are your key considerations? Share with us on Facebook, X, and LinkedIn. We’d love to hear from you!

Image Source: Shutterstock

Eric Lefebvre
As chief technology officer (CTO), Eric sets and oversees technology strategy for Sovos. With more than 25 years of experience leading technology teams, he is a strong proponent for establishing a corporate vision and then providing his teams with the room to work, ensuring they have the freedom to tap into their full potential.