
Apple launches MLX machine-learning framework for Apple Silicon

news
Dec 06, 2023 | 4 mins
Apple, Artificial Intelligence, Enterprise Applications

Apple’s machine learning (ML) teams quietly flexed their muscle with the release of a new ML framework developed for Apple Silicon.

Apple’s machine learning (ML) teams have released a new ML framework for Apple Silicon. MLX, or ML Explore, arrives after a summer of testing and is now available through GitHub.

Machine Learning for Apple Silicon

In a post on X, Awni Hannun of Apple’s ML team calls the software “…an efficient machine learning framework specifically designed for Apple silicon (i.e., your laptop!)”

The idea is that it streamlines training and deployment of ML models for researchers who use Apple hardware. MLX is a NumPy-like array framework designed for efficient and flexible machine learning on Apple’s processors.

This isn’t a consumer-facing tool; it equips developers with what appears to be a powerful environment in which to build ML models. The company also seems to have worked to embrace the languages developers already use, rather than force a new one on them, and it apparently developed powerful LLM tools in the process.

Familiar to developers

The MLX design is inspired by existing frameworks such as PyTorch, Jax, and ArrayFire. However, MLX adds support for a unified memory model, which means arrays live in shared memory and operations can be performed on any of the supported device types without copying data.

The team explains: “The Python API closely follows NumPy with a few exceptions. MLX also has a fully featured C++ API which closely follows the Python API.”
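To make that claim concrete, here is a small sketch of what NumPy-style array code looks like. The snippet uses NumPy itself so it runs anywhere; per the MLX README, swapping the import for `mlx.core` (conventionally aliased `mx`) yields nearly identical code. The `mx.*` names in the comments come from MLX's documented API; treat the exact correspondence as an assumption, not a guarantee.

```python
import numpy as np  # with MLX installed, this would be: import mlx.core as mx

# NumPy-style array math; MLX's Python API is documented as closely
# following these calls, so the mx.* equivalents look nearly identical.
a = np.array([1.0, 2.0, 3.0])  # MLX: mx.array([1.0, 2.0, 3.0])
b = np.ones(3)                 # MLX: mx.ones(3)
c = (a + b) ** 2               # same operators either way
total = float(c.sum())         # 4 + 9 + 16 = 29.0
print(total)
```

One notable difference hinted at in the release notes: in MLX, `c` would remain an unevaluated, lazy expression until its value is actually needed, whereas NumPy computes it immediately.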

Notes accompanying the release also say:

“The framework is intended to be user-friendly, but still efficient to train and deploy models…. We intend to make it easy for researchers to extend and improve MLX with the goal of quickly exploring new ideas.”

Pretty good at first glance

At first glance, MLX looks promising and, as explained on GitHub, is equipped with several features that set it apart beyond its familiar APIs:

  • Composable function transformations: MLX has composable function transformations for automatic differentiation, automatic vectorization, and computation graph optimization.
  • Lazy computation: Computations in MLX are lazy. Arrays are only materialized when needed.
  • Dynamic graph construction: Computation graphs in MLX are built dynamically. Changing the shapes of function arguments does not trigger slow compilations, and debugging is simple and intuitive.
  • Multi-device: Operations can run on any of the supported devices (currently, the CPU and GPU).
  • Unified memory: Under the unified memory model, arrays in MLX live in shared memory. Operations on MLX arrays can be performed on any of the supported device types without moving data.

What it can already achieve

Apple has provided a collection of examples of what MLX can do. These appear to confirm the company now has a highly efficient language model, powerful tools for image generation using Stable Diffusion, and highly accurate speech recognition. This tallies with claims made earlier this year, and with some speculation concerning infinite virtual world creation for future Vision Pro experiences.

Examples include:

  • Train a Transformer LM or fine-tune with LoRA.
  • Text generation with Mistral.
  • Image generation with Stable Diffusion.
  • Speech recognition with Whisper.
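The LoRA (low-rank adaptation) example deserves a word of explanation: the core trick is to freeze a large pretrained weight matrix W and train only a small low-rank update A·B alongside it. The NumPy sketch below shows just that arithmetic, with made-up dimensions for illustration; the actual MLX example on GitHub applies this inside full Transformer layers.

```python
import numpy as np

# Toy sketch of the LoRA idea: freeze a large weight matrix W and
# train only a small low-rank update A @ B (dimensions are illustrative).
rng = np.random.default_rng(0)
d, r = 8, 2                              # model dim 8, adapter rank 2
W = rng.standard_normal((d, d))          # frozen pretrained weights
A = rng.standard_normal((d, r)) * 0.01   # trainable, shape (d, r)
B = np.zeros((r, d))                     # trainable, shape (r, d); starts at zero

W_eff = W + A @ B                        # effective weights at inference
x = rng.standard_normal(d)
y = x @ W_eff                            # with B = 0, output matches frozen W

# Only d*r + r*d = 32 adapter parameters are trained instead of d*d = 64.
```

Because B starts at zero, fine-tuning begins exactly where the pretrained model left off, and the trainable parameter count shrinks dramatically at larger dimensions.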

Developers, developers….

Ultimately, Apple seems to want to democratize machine learning. “MLX is designed by machine learning researchers for machine learning researchers,” the team explains.

In other words, Apple has recognized the need to build open, easy-to-use development environments for machine learning in order to nurture further work in that space.

That MLX lives on Apple Silicon is also important, given that Apple’s processors now span all its products, including Mac, iPhone, and iPad. The use of the GPU, CPU, and, conceivably at some point, the Neural Engine on those chips could translate into on-device execution of ML models (for privacy) with performance other processors cannot match, at least among edge devices.

Is it too little, too late?

Given the big buzz that emerged around OpenAI’s ChatGPT when it appeared around this time last year, is Apple really late to the party? I don’t think so.

The company has clearly decided to place its focus on equipping ML researchers with the best tools it can make, including powerful M3 Macs to build models on.

Now, it wants to translate that attention into viable, human-focused AI tools for the rest of us to enjoy. It is much too early to declare Apple defeated in an AI industry war that has really only just begun.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

jonny_evans

Hello, and thanks for dropping in. I'm pleased to meet you. I'm Jonny Evans, and I've been writing (mainly about Apple) since 1999. These days I write my daily AppleHolic blog at Computerworld.com, where I explore Apple's growing identity in the enterprise. You can also keep up with my work at AppleMust, and follow me on Mastodon, LinkedIn and (maybe) Twitter.