It's common to focus too much on building models and growing data science teams rather than on operationalizing those models to drive the bottom line. How do you strike the right balance?

Vid Jain, CEO & Founder, Wallaroo Labs

June 2, 2022


Enterprises have poured billions of dollars into artificial intelligence on the promise of increased automation, customer experiences personalized at scale, and more accurate predictions that drive revenue or reduce operating costs. As expectations for these projects have grown, organizations have hired more and more data scientists to build ML models. So far, though, there has been a massive gap between AI’s potential and its outcomes, with only about 10% of AI investments yielding significant ROI.

When I was part of the automated trading business at one of the top investment banks a decade ago, we saw that finding patterns in the data and building models (that is, trading algorithms) was the easier part; operationalizing those models was much harder. The hard part was deploying the models quickly against live market data, running them efficiently enough that compute costs didn’t outweigh the investment gains, and then measuring their performance so we could immediately pull the plug on bad trading algorithms while continuously iterating on and improving the ones that generated P&L. This is what I call “the last mile of machine learning.”

The Missing ROI: The Challenge of the Last Mile

Today, line-of-business leaders and chief data and analytics officers tell my team that they have reached the point where hiring more data scientists no longer produces more business value. Yes, expert data scientists are needed to develop and improve machine learning algorithms. Yet when we start asking questions to identify what is blocking them from extracting value from their AI, they quickly realize the bottleneck is actually at the last mile, after the initial model development.

As AI teams moved from development to production, data scientists were asked to spend more and more time on “infrastructure plumbing” issues. They also lacked the tools to troubleshoot models in production or to answer business questions about model performance, so they spent still more time writing ad hoc queries to gather and aggregate production data just to do basic analysis of their production models. The result: models took days and weeks (or, for large, complex datasets, even months) to get into production, data science teams were flying blind in the production environment, and even as the teams grew, they weren’t doing the things they were actually good at.
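To make that concrete, the ad hoc analysis described above often amounts to a hand-rolled script like the sketch below. It is illustrative only: the exported log file and its columns (model_version, prediction, actual, latency_ms) are hypothetical placeholders rather than any real schema, and pandas is simply a convenient tool for the aggregation.

```python
import pandas as pd

# Hypothetical export of production prediction logs; the file name and the
# columns (timestamp, model_version, prediction, actual, latency_ms) are
# placeholders, not a real schema.
logs = pd.read_csv("prediction_logs_2022_05.csv", parse_dates=["timestamp"])

# Roll the raw logs up into a daily view per model version.
daily = logs.groupby(["model_version", pd.Grouper(key="timestamp", freq="D")]).agg(
    requests=("prediction", "size"),
    avg_prediction=("prediction", "mean"),
    avg_actual=("actual", "mean"),
    p95_latency_ms=("latency_ms", lambda s: s.quantile(0.95)),
)

print(daily.tail(7))  # a rough, manual snapshot of last week's model behavior
```

Scripts like this answer one question, once. They are not a substitute for continuous observability of models in production, which is exactly the gap these teams keep running into.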

Data scientists excel at turning data into models that help solve business problems and inform business decisions. But the expertise and skills required to build great models aren’t the same skills needed to push those models into the real world as production-ready code, and then to monitor and update them on an ongoing basis.

Enter the ML Engineers…

ML engineers are responsible for integrating tools and frameworks so that the data, the data engineering pipelines, and the key infrastructure work cohesively to productionize ML models at scale. Adding these engineers lets data scientists refocus on model development and management, and it relieves some of the pressure on AI teams. But even with the best ML engineers, enterprises face three major problems in scaling AI:

  1. The inability to hire ML engineers fast enough: Even with ML engineers taking over many of the plumbing issues, scaling your AI means scaling your engineering team, and that breaks down quickly. Demand for ML engineers has become intense, with job openings growing 30x faster than those in IT services as a whole. Instead of waiting months or even years to fill these roles, AI teams need to find a way to support more ML models and use cases without a linear increase in ML engineering headcount. But that leads to the second bottleneck …

  2. The lack of a repeatable, scalable process for deploying models, no matter where or how a model was built: The reality of the modern enterprise data ecosystem is that different business units use different data platforms based on the data and tech requirements of their use cases (for example, the product team might need to support streaming data, whereas finance needs a simple querying interface for non-technical users). Additionally, data science is often a function dispersed into the business units themselves rather than a centralized practice. Each of these data science teams, in turn, usually has its own preferred model-training framework based on the use cases it is solving for, so a one-size-fits-all training framework for the entire enterprise may not be tenable.

  3. Putting too much emphasis on building models instead of monitoring and improving model performance: Just as software engineers need to monitor their code in production, ML engineers need to monitor the health and performance of both their infrastructure and their models once they are deployed and operating on real-world data. That ongoing observability is what allows AI and ML initiatives to mature and scale; a minimal sketch of one such check follows this list.
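
To make the monitoring point concrete, here is a minimal, generic sketch of one common production check: comparing the distribution of a model input (or of its predictions) in production against a training-time baseline to flag drift. This is an illustration under stated assumptions, not any particular vendor’s tooling; the arrays, the synthetic data, and the alert threshold are placeholders.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(baseline: np.ndarray, production: np.ndarray, p_threshold: float = 0.01) -> bool:
    """Flag drift when the two samples differ significantly under a
    two-sample Kolmogorov-Smirnov test (the threshold is a placeholder)."""
    _statistic, p_value = ks_2samp(baseline, production)
    return p_value < p_threshold

# Synthetic data standing in for logged values: a training-time baseline and
# a production sample whose distribution has shifted.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)
production = rng.normal(loc=0.4, scale=1.0, size=5_000)

if drift_alert(baseline, production):
    print("Drift detected: investigate, retrain, or roll back the model.")
```

In a mature setup, checks like this run continuously against live inference data and feed alerting and retraining workflows, rather than living in one-off notebooks.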

To really take their AI to the next level, today’s enterprises need to focus on the people and tools that can productionize ML models at scale. This means shifting attention away from ever-expanding data science teams and taking a close look at where the true bottlenecks lie. Only then will they begin to see the business value they set out to achieve with their ML projects in the first place.

About the Author

Vid Jain

CEO & Founder, Wallaroo Labs

Vid Jain holds a Ph.D. in theoretical physics from UC Berkeley, is the author of three internet-tracking patents, and has spent the last 20 years pushing the technology envelope with data-driven applications. He was co-founder of an adtech startup focused on interactive advertising. Following that, he joined the trading business of Merrill Lynch, where he was part of a small team that automated the firm’s trading operations with ultrafast algorithms and helped build a $1 billion annual-revenue business. Vid is currently the CEO and founder of Wallaroo Labs, a New York City-based company that makes it easy for AI teams to deploy, observe, and manage machine learning models in production at scale.
