IBM’s CodeFlare automates AI model development

IBM logo is seen on a smartphone and a PC screen.
Image Credit: Photo Illustration by Pavlo Gonchar/SOPA Images/LightRocket via Getty Images

IBM today announced a new serverless framework called CodeFlare that’s designed to reduce the time developers spend preparing AI models for deployment in hybrid cloud environments. The company says it automates the training, processing, and scaling of models to enable engineers to focus on data insights.

Data and machine learning analytics are proliferating across industries, with the tasks becoming increasingly complex. Larger datasets and systems tailored for AI research naturally become more involved, forcing researchers to spend more time configuring their setups. For example, to create a machine learning model today, researchers and developers have to first train and optimize the model, which involves data cleaning, normalization, feature extraction (i.e., reducing the number of resources required to describe a dataset), and more.
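To make those steps concrete, here is a minimal, generic sketch of that preparation work using scikit-learn rather than CodeFlare itself: imputation stands in for data cleaning, standardization for normalization, and PCA for feature extraction. The dataset and model choices are illustrative assumptions, not anything from IBM's announcement.

```python
# Illustrative preprocessing-and-training pipeline (not CodeFlare-specific).
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)

pipeline = Pipeline([
    ("clean", SimpleImputer(strategy="mean")),     # data cleaning: fill missing values
    ("normalize", StandardScaler()),               # normalization: zero mean, unit variance
    ("extract", PCA(n_components=10)),             # feature extraction: reduce dimensionality
    ("model", LogisticRegression(max_iter=1000)),  # the model being trained
])

pipeline.fit(X, y)
print(pipeline.score(X, y))
```

Each of these stages has its own configuration and compute footprint, which is exactly the setup overhead CodeFlare is pitched at reducing.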

CodeFlare is aimed at simplifying the AI iteration process with specific elements for scaling data workflows. It’s built on top of Ray, the University of California, Berkeley RISE Lab’s open source distributed computing system for AI apps, and it emerged from a project in the IBM group responsible for creating one of the world’s first prototype 2-nanometer chips. (Interestingly, the creators of Ray founded the startup Anyscale, which offers managed services powered by the technology.)


“CodeFlare takes the notion of simplified machine learning … one step further, going beyond isolated steps to seamlessly integrate end-to-end pipelines with a data scientist friendly interface — like Python, not containers,” Priya Nagpurkar, director of hybrid cloud platform at IBM Research, told VentureBeat via email. “CodeFlare differentiates itself by making it simpler to integrate and scale full pipelines with a unified runtime and programming interface.”

Distributed workflows

CodeFlare offers a Python-based interface for managing pipelines across multiple platforms. Pipelines can be parallelized and shared within most compute environments, as well as integrated and bridged with other cloud-native ecosystems via adapters. Trigger functionality enables CodeFlare pipelines to be kicked off when certain events occur, like the arrival of a new file, while support for loading and partitioning lets the pipelines draw on a range of data sources, including filesystems, object storage, data lakes, and distributed filesystems.
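As a rough illustration of the parallelism CodeFlare is layered on, the sketch below uses plain Ray primitives, not CodeFlare's own interface; the stage functions and the toy data partitions are assumptions made for the example. It shows how pipeline stages can be fanned out across partitions and chained, which is the pattern the framework scales across environments.

```python
# Hedged sketch of partition-parallel pipeline stages using Ray (the system
# CodeFlare builds on), not CodeFlare's own API.
import ray

ray.init()

@ray.remote
def preprocess(partition):
    # Placeholder transform: mean-center one data partition.
    mean = sum(partition) / len(partition)
    return [x - mean for x in partition]

@ray.remote
def train(partition):
    # Placeholder "training" step: returns a trivial statistic per partition.
    return sum(x * x for x in partition) / len(partition)

# Partition a data source, then run both stages in parallel across partitions.
partitions = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
cleaned = [preprocess.remote(p) for p in partitions]
results = ray.get([train.remote(c) for c in cleaned])
print(results)

ray.shutdown()
```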

In this respect, CodeFlare is similar to Amazon SageMaker Pipelines, which help automate and organize the flow of machine learning pipelines from a cloud dashboard. Google, Microsoft, and Hypernet Labs offer comparable services in Cloud AI Platform Pipelines, Azure Machine Learning pipelines, and Galileo, respectively. But IBM asserts CodeFlare was built from the ground up to support hybrid clouds, which tap a combination of on-premises and cloud infrastructure.

“The motivation behind the framework is the emergence of converged workflows, combining AI and machine learning, data analytics and modeling, and the increasing complexity in integrating modalities beyond individual steps,” Nagpurkar said. “We saw an opportunity to significantly optimize pipelines under a common runtime, where data dependencies and execution control can be efficiently managed and optimized.”

IBM claims CodeFlare can cut the time to execute analysis and optimization of 100,000 training pipelines from four hours to 15 minutes. The company says it’s working with customers to integrate CodeFlare into their software streams, as well as using it internally in its own AI research.

As of today, CodeFlare is available in open source, along with a series of technical blog posts on how it works and what developers need to get started. Going forward, IBM plans to continue evolving CodeFlare to support more complex pipelines and capabilities like fault-tolerance and consistency, integration and data management for external sources, and support for pipeline visualization.

“Enabling a unified experience to scale pipelines from a laptop to a small cluster to the cloud is a major focus for CodeFlare,” Nagpurkar said. “We see CodeFlare as one of the key next steps in the evolution of our hybrid cloud platform. In terms of value to users, it is important to highlight that by significantly improving efficiency, CodeFlare enables not only cost and time savings, but it also creates the opportunity to tackle new use cases that were previously simply impractical due to size, scale, or complexity.”

The launch of CodeFlare comes as AI adoption accelerates in the enterprise, a trend attributable in part to the pandemic. A survey by KPMG suggests a large number of organizations have increased their investments to the point that they're concerned about moving too fast. At the current pace, McKinsey forecasts that AI could contribute an additional 1.2% to gross domestic product growth for the next 10 years.