
Microsoft launches Project Bonsai, an AI development platform for industrial systems

Microsoft Project Bonsai
Image Credit: Microsoft



Microsoft announced the public preview of Project Bonsai, a platform for building autonomous industrial control systems, during its Build 2020 online conference. The company also debuted an experimental platform called Project Moab that’s designed to familiarize engineers and developers with Bonsai’s functionality.

Project Bonsai is a “machine teaching” service that combines machine learning, calibration, and optimization to bring autonomy to the control systems at the heart of robotic arms, bulldozer blades, forklifts, underground drills, rescue vehicles, wind and solar farms, and more. Control systems form a core component of machinery across sectors like manufacturing, chemical processing, construction, energy, and mining, helping manage everything from electrical substations and HVAC installations to fleets of factory floor robots. But developing AI and machine learning algorithms atop them — algorithms that could tackle processes previously too challenging to automate — requires expertise.

Project Bonsai attempts to marry this expertise with a powerful simulation toolkit hosted on Microsoft Azure.

Ramping up industries

At a high level, Project Bonsai’s aim is to hasten the arrival of “Industry 4.0,” an industrial transformation Microsoft defines as the infusion of intelligence, connectivity, and automation into the physical world. Beyond new technology, Industry 4.0 entails new ecosystems and strategies that put AI to work at scale. Microsoft cites a World Economic Forum study that found 50% of organizations embracing AI within the next seven years could double their cash flow.


For manufacturers in this transitional phase, the end goal is often to attain “prescriptive” intelligence, where adaptive, self-optimizing technology and processes help equipment and machinery adjust to changing inputs and conditions. Existing control systems are limited to executing a set of deterministic instructions within predictable, unchanging environments. Next-generation control systems tap AI to go beyond basic automation, adjusting in real time to changing environments or inputs and even optimizing toward multiple goals.

Project Bonsai is designed to create these systems, which combine digital feedback loops with human experience to inform actions and recommendations. Historical data drives particular operations and product improvements, enabling systems to complete tasks like calibration more quickly and precisely than human operators.

Machine teaching and simulation

Project Bonsai is an outgrowth of Microsoft’s 2018 acquisition of Berkeley, California-based Bonsai, which previously received funding from the company’s venture capital arm M12. Bonsai is the brainchild of former Microsoft engineers Keen Browne and Mark Hammond, who is now the general manager of business AI at Microsoft. The pair developed an approach, built on Google’s TensorFlow framework, that abstracts away low-level AI mechanics, enabling subject-matter experts to train autonomous systems to achieve goals regardless of their AI expertise.

In September 2017, Bonsai set a new benchmark for autonomous industrial control systems, successfully training a robot arm in simulation to grasp and stack blocks, reportedly 45 times faster than a comparable approach from Alphabet’s DeepMind.

Microsoft refers to the abstraction process as machine teaching. Its central tenet is problem-solving by breaking down a workload into simpler concepts (or subconcepts), training each one individually, and then combining them. The technique is a form of hierarchical deep reinforcement learning, in which an AI learns by executing decisions and receiving rewards for actions that bring it closer to a goal. The company claims this approach can decrease training time while allowing developers to reuse concepts.


For example, in a warehouse and logistics scenario, an engineering team could use machine teaching to train autonomous forklifts. Engineers would start with simpler skills, such as aligning with a pallet, and build on them to teach the forklift to drive toward the pallet, pick it up, and set it down. Ultimately, the autonomous forklift would learn to detect other people and equipment and to return to its charging station.
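To make the decomposition concrete, here is a minimal sketch in Python. It is not Bonsai’s Inkling language or SDK; the concept names, the train_concept stub, and the hand-written selector are invented for illustration, standing in for subconcepts the platform would actually train and combine.

```python
# A hypothetical sketch of hierarchical "machine teaching" for the forklift
# scenario: each subconcept is trained on its own, then composed by a
# higher-level selector. None of these names come from the Bonsai SDK.

from typing import Callable, Dict

Policy = Callable[[dict], dict]  # maps an observation to an action

def train_concept(name: str, reward_fn: Callable[[dict], float]) -> Policy:
    """Stand-in for training one subconcept against its own reward signal.
    A real system would run reinforcement learning here; this stub returns a
    fixed policy so the example stays self-contained."""
    def policy(observation: dict) -> dict:
        return {"concept": name, "steer": 0.0, "lift": 0.0}
    return policy

# 1. Train the simpler skills individually.
align_with_pallet = train_concept("align", lambda s: -abs(s["pallet_offset"]))
drive_to_pallet = train_concept("drive", lambda s: -s["pallet_distance"])
pick_up_pallet = train_concept("pick", lambda s: float(s["pallet_lifted"]))

# 2. Combine them: a selector decides which trained skill applies right now.
def forklift_selector(observation: dict) -> dict:
    subconcepts: Dict[str, Policy] = {
        "align": align_with_pallet,
        "drive": drive_to_pallet,
        "pick": pick_up_pallet,
    }
    if observation["pallet_distance"] > 1.0:
        return subconcepts["drive"](observation)
    if abs(observation["pallet_offset"]) > 0.1:
        return subconcepts["align"](observation)
    return subconcepts["pick"](observation)

print(forklift_selector({"pallet_distance": 2.5, "pallet_offset": 0.3,
                         "pallet_lifted": False}))
```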

“There’s a joke in reinforcement learning among researchers that goes something like this: If you have a problem and you model it like a reinforcement learning problem, now you have two problems,” Microsoft CVP Gurdeep Pall told VentureBeat in a phone interview. “It’s a very complex field. It’s not just about selecting the right algorithm — continuous versus discrete, on-policy versus off-policy, model-based versus model-free, and hybrid models — but rewards.”

As Pall explained, rewards in reinforcement learning signal every correct step an AI takes toward its goal. Crafting these rewards — which must be expressed mathematically — is difficult because they have to capture every nuance of multistep tasks. And improperly crafted rewards can result in catastrophic forgetting, where a model completely and abruptly forgets the information it previously learned.
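To see why this is hard, consider a hand-written reward for the ball-balancing task Project Moab tackles later in this article. Every nuance (stay near the center, don’t fall off, don’t oscillate) has to be encoded as a weighted term; the weights and plate radius below are illustrative guesses, and misjudging them is exactly the kind of failure Pall describes.

```python
import math

def balance_reward(ball_x: float, ball_y: float,
                   vel_x: float, vel_y: float,
                   plate_radius: float = 0.1) -> float:
    """Hand-crafted reward for keeping a ball centered on a plate.
    The weights are guesses; badly chosen ones can push the learner toward
    degenerate behavior, such as tolerating drift to avoid oscillation."""
    distance = math.hypot(ball_x, ball_y)
    speed = math.hypot(vel_x, vel_y)

    if distance > plate_radius:          # ball fell off: large penalty
        return -10.0

    closeness_term = 1.0 - distance / plate_radius   # reward staying centered
    stillness_term = -0.5 * speed                    # penalize oscillation
    survival_term = 0.1                              # reward every step alive
    return closeness_term + stillness_term + survival_term

print(balance_reward(ball_x=0.02, ball_y=0.01, vel_x=0.05, vel_y=-0.03))
```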


“What machine teaching does is that it takes a lot of these hard problems and really puts the problem on rails. It constrains how you specify the problem,” added Pall. “The [Bonsai platform] automatically selects the algorithm and [parameters] … from a whole suite of options, and it has abstraction goals, which rather than requiring a user to specify a reward, instead has them specify the outcome they want to achieve. Given a state space and this outcome, we automatically figure out a reward function against which we train the reinforcement learning algorithm.”
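A rough way to picture this inversion is a declarative goal description that the platform compiles into a reward on the user’s behalf. The Python structures and the compile_goals helper below are invented for illustration (real Bonsai goals are written in Inkling); they only show the shape of the idea: state what to avoid and what to drive toward, and let the tooling derive the reward.

```python
import math
from typing import Callable, List, Tuple

# Hypothetical goal descriptions: the outcome we want, not the reward itself.
# ("avoid", fn) means the state must never satisfy fn;
# ("drive", fn) means the platform should push fn's value toward zero.
Goal = Tuple[str, Callable[[dict], float]]

goals: List[Goal] = [
    ("avoid", lambda s: 1.0 if math.hypot(s["x"], s["y"]) > s["radius"] else 0.0),
    ("drive", lambda s: math.hypot(s["x"], s["y"])),
]

def compile_goals(goal_list: List[Goal]) -> Callable[[dict], float]:
    """Invented stand-in for the platform step that derives a reward function
    from outcome-level goals, so the user never writes the reward directly."""
    def reward(state: dict) -> float:
        total = 0.0
        for kind, fn in goal_list:
            if kind == "avoid":
                total -= 10.0 * fn(state)   # heavy penalty for violations
            elif kind == "drive":
                total -= fn(state)          # gentle pressure toward the target
        return total
    return reward

reward_fn = compile_goals(goals)
print(reward_fn({"x": 0.03, "y": -0.02, "radius": 0.1}))
```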

Project Bonsai’s general-purpose reinforcement learning platform orchestrates AI model development. It provides access to algorithms and infrastructure for both model training and deployment, and it allows models to be deployed on-premises, on-device, or in the cloud, with support for simulators like MATLAB Simulink, Transys, Gazebo, and AnyLogic. (On-premises deployments require a controller companion to interface with the controller computer in real time.) From a dashboard, Bonsai customers can view all active jobs — called BRAINs — along with their training status, and can debug, inspect, and refine models. They can also collaborate with colleagues to build and deploy new models.

It’s a largely hands-off process. After concepts are programmed into a model using Project Bonsai’s special-purpose programming language, Inkling, the code is combined with a simulation of a real-world system and fed into the Bonsai AI Engine for training. The engine automatically selects the best algorithm to train a model, laying out the neural networks and tuning their parameters. And the platform runs multiple simulations in parallel to reduce training time, streaming predictions from trained models to software or hardware through Bonsai-provided libraries.
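In practice, the simulation side of that pipeline reduces to something the engine can reset and step repeatedly. The adapter below is a generic Python sketch, not the actual Bonsai client library; the class and method names are assumptions intended only to show the loop shape the engine would drive, typically across many simulator instances in parallel.

```python
import random

class BallPlateSimulator:
    """Toy stand-in for a physics simulator that a training engine drives."""

    def episode_start(self, config: dict) -> dict:
        self.x = config.get("initial_x", random.uniform(-0.05, 0.05))
        self.vx = 0.0
        return {"x": self.x, "vx": self.vx}

    def episode_step(self, action: dict) -> dict:
        # Extremely simplified one-dimensional dynamics:
        # tilting the plate accelerates the ball.
        dt = 0.02
        self.vx += action["tilt"] * 9.81 * dt
        self.x += self.vx * dt
        return {"x": self.x, "vx": self.vx}

def run_episode(sim: BallPlateSimulator, policy, steps: int = 200) -> float:
    """One training rollout; a real engine would run many in parallel."""
    state = sim.episode_start({})
    total_reward = 0.0
    for _ in range(steps):
        state = sim.episode_step(policy(state))
        total_reward += 1.0 - min(abs(state["x"]) / 0.1, 1.0)
    return total_reward

random_policy = lambda state: {"tilt": random.uniform(-0.1, 0.1)}
print(run_episode(BallPlateSimulator(), random_policy))
```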


Bonsai adopts a “digital twin” approach to simulation — an approach that has gained currency in other domains. For instance, London-based SenSat helps clients in construction, mining, energy, and other industries create models of locations relevant to projects they’re working on, translating the real world into a version that can be understood by machines. GE offers technology that allows companies to model digital twins of actual machines, whose performance is closely tracked. Oracle has services that rely on virtual representations of objects, equipment, and work environments. And Microsoft itself provides Azure Digital Twins, which models the relationships and interactions between people, places, and devices in simulated environments.

Within Project Bonsai’s platform, a model learning to control a bulldozer, for instance, would receive information about the variables in the simulated environment — like the type of dirt or proximity of people walking nearby — before deciding on actions. These decisions would improve over time to maximize the reward, and domain experts could tweak the system to arrive at a solution that works.
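Concretely, the simulated environment exposes those variables as a structured observation the model reads before each decision. The field names and thresholds below are made up for the bulldozer example and are not a Bonsai schema.

```python
from dataclasses import dataclass

@dataclass
class BulldozerObservation:
    """Illustrative digital-twin state fed to the model at each decision step."""
    soil_type: str            # e.g. "clay", "gravel", "sand"
    blade_load_kg: float      # current load against the blade
    nearest_person_m: float   # proximity of people walking nearby
    slope_deg: float          # grade of the terrain under the tracks

def choose_action(obs: BulldozerObservation) -> dict:
    """Placeholder decision logic; a trained model would replace this."""
    if obs.nearest_person_m < 5.0:
        return {"throttle": 0.0, "blade_height": 0.5}   # stop and raise the blade
    throttle = 0.6 if obs.soil_type == "clay" else 0.8
    return {"throttle": throttle, "blade_height": 0.1}

print(choose_action(BulldozerObservation("clay", 850.0, 12.0, 3.5)))
```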

It’s akin to — albeit ostensibly easier to use than — Microsoft’s AirSim framework for Unity, which simulates environments with realistic physics for testing AI-driven drones, cars, and more. Like the Project Bonsai platform, it’s intended as a safe, repeatable proving ground for autonomous machines — in other words, a means of collecting data prior to real-world prototyping. In a recent technical paper, Microsoft researchers demonstrated how AirSim could be used to train drone-controlling AI in simulation and transfer it to the real world, bridging the simulation-reality gap.


Microsoft says that Bonsai simulations — which are hosted on Azure — can replicate millions of different real-world scenarios that a system might encounter, including edge cases like sensor and component failures. Post-training, models can be deployed either in a decision-support capacity, in which they integrate with existing monitoring software to provide recommendations and predictions, or with direct decision authority, in which they develop solutions to challenging situations on their own.
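One way to picture edge cases such as sensor failure is to corrupt simulated readings before the model sees them, so failure modes show up during training rather than in the field. The helper below is an assumed illustration, not Bonsai functionality.

```python
import random

def with_sensor_faults(reading: float, failure_prob: float = 0.01,
                       noise_std: float = 0.002) -> float:
    """Return a sensor reading with occasional dropouts and noise injected,
    so training rollouts cover failure scenarios as well as nominal ones."""
    if random.random() < failure_prob:
        return float("nan")                  # simulated sensor dropout
    return reading + random.gauss(0.0, noise_std)

# Example: a position sensor that occasionally fails during training rollouts.
print([round(with_sensor_faults(0.05), 4) for _ in range(5)])
```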

Project Moab

To onboard engineers and developers keen to begin experimenting with Bonsai, Microsoft created Project Moab, a new hardware kit that’s available as a simulator from MathWorks and will soon be available as a physical kit for 3D printing. (Developers who don’t wish to print it themselves will be able to purchase fully assembled units later in the year.) It’s a three-armed robot with a joystick controller that attempts to keep a ball balanced on a transparent, magnet-attached plate, and it’s intended to give users an environment in which they can learn and experiment with simulations.


Ball balancing is a classic mechanical engineering challenge known as a regulator-type control problem. Given any starting condition, a self-balancing system must produce a control signal that drives the system to the desired final state — i.e., a ball brought to rest at the center of the platform. Most classical ways of solving it involve differential equations, which represent physical quantities and their rates of change. But Project Moab seeks to tease out machine learning solutions to the problem.
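For a sense of the classical route being contrasted here, a proportional-derivative (PD) regulator for one axis of the plate fits in a few lines; the gains and the simplified dynamics below are illustrative assumptions, not values tuned for the real Moab hardware.

```python
def pd_plate_angle(ball_pos: float, ball_vel: float,
                   kp: float = 0.8, kd: float = 0.3) -> float:
    """Classical regulator: tilt the plate against the ball's position error
    and velocity so the ball comes to rest at the center (setpoint = 0)."""
    return -(kp * ball_pos + kd * ball_vel)

def simulate(steps: int = 500, dt: float = 0.01) -> float:
    """Closed-loop run with simplified, small-angle ball-on-plate dynamics."""
    g = 9.81
    pos, vel = 0.05, 0.0                 # ball starts 5 cm off-center
    for _ in range(steps):
        angle = pd_plate_angle(pos, vel)
        accel = g * angle                # tilt produces proportional acceleration
        vel += accel * dt
        pos += vel * dt
    return pos                           # should end near zero

print(f"final ball position: {simulate():.4f} m")
```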

It’s more challenging than it might sound, because any ball-balancing system must be able to generalize — that is, construct a robust control law on the basis of training data. Achieving good generalization requires generating a sufficiently rich set of inputs during the training phase; failing to do so results in poor performance.
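Generating that richness typically means randomizing the conditions each training episode starts from. A minimal sketch, assuming a simulator that accepts an episode configuration like the one in the earlier example:

```python
import math
import random

def random_episode_config(plate_radius: float = 0.1) -> dict:
    """Randomize each episode's starting conditions so the learned control law
    sees a rich set of inputs rather than one canned scenario."""
    angle = random.uniform(0.0, 2.0 * math.pi)
    start_radius = random.uniform(0.0, 0.8 * plate_radius)
    return {
        "initial_x": start_radius * math.cos(angle),
        "initial_y": start_radius * math.sin(angle),
        "initial_vel_x": random.uniform(-0.1, 0.1),    # m/s
        "initial_vel_y": random.uniform(-0.1, 0.1),
        "ball_mass_kg": random.uniform(0.002, 0.006),  # light vs. heavier ball
    }

# Each training episode draws a fresh configuration.
for _ in range(3):
    print(random_episode_config())
```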


Why build a kit around this problem as opposed to another? According to Hammond, the Project Moab team wanted to pick a device engineers and developers could use to learn the steps they’d have to accomplish if they were to build an autonomous system. With Moab, developers have to employ simulators to model physical systems and incorporate those into a training regime. As for engineers, many of whom are likely familiar with classical solutions to the ball-balancing problem, they have to learn to solve it with AI.

“We’re giving people more tools in their tool chest that they can use to expand the spectrum of problems they can solve,” said Hammond. “You can very quickly take it into areas where doing it in traditional ways would not be easy, such as balancing an egg instead. The point of the Project Moab system is to provide that playground where engineers tackling various problems can learn how to use the tooling and simulation models. Once they understand the concepts, they can apply it to their novel use case.”


Project Moab’s tutorials delve into more than balancing balls. Moab can be taught to catch balls thrown toward it after they bounce on a table, and to rebalance balls disturbed after an object like a pencil pokes at them. It can also learn to balance objects while ensuring they don’t come into contact with obstacles placed on the plate, sort of like a self-contained game of labyrinth.

Most of Moab’s components — including the plate and arm-controlling actuators — are interchangeable. Developers can install more powerful actuators to have Moab throw things as well as catch them, for instance. And with the software development kit, other simulation products and custom simulations can be used to train Moab to accomplish more challenging tasks.


Hammond wouldn’t rule out future robotics kits for Bonsai, but he said it would largely depend on the community and their response to Moab. “We want the community to have the ability to experiment and do all sorts of fun, novel things that people hadn’t thought of before,” said Hammond. “Making [a project like this] open source makes that possible.”

Project Bonsai in the real world

SCG is among the companies that tapped Project Bonsai to imbue their industrial control systems with machine learning. SCG’s chemical division created a simulation within the Bonsai platform to speed up the process of optimizing petrochemical sequences, to the tune of 100,000 simulations per day, each modeling millions of scenarios. Microsoft claims the fully trained model is able to develop a sequence in a week, whereas it previously required several months for a group of experienced engineers.

“Polymers are designed with a particular application in mind. In order to figure out the stages of manufacturing, you need to know the mixing, temperature, and other factors,” said Pall. “The process of coming up with a plan of how a polymer can be manufactured takes six months, traditionally, because it’s done inside a simulator with a human expert guiding the simulator and trying a step, eventually getting it right, and then moving on to the next step. Bonsai found a BRAIN that surfaces solutions to the manufacturability of a given polymer and then controls machines to produce it.”

SCG has the distinction of being the first to deploy a Bonsai-trained model into production, according to Microsoft. With respect to pricing, the machine teaching component of Bonsai is available at no cost to customers, but the simulations performed in Azure are billed according to usage. Companies must purchase a commercial license if they decide to use their models in the real world.

Siemens tapped Project Bonsai for another purpose: calibrating its CNC machines. Previously, this was a manual process that required an average of 20 to 25 iterative steps over more than two hours, typically under the supervision of third-party experts. By contrast, the Project Bonsai solution is designed to automate that calibration in seconds or minutes. Siemens says that by training a model with Bonsai, it was able to attain two-micron precision in an average of four to five iterative steps taking about 13 seconds, and sub-micron precision in about 10 iterative steps.

“[Project Bonsai’s] approach bridges AI science and software to the traditional engineering world,” said Hammond. “[It enables fields] such as chemical and mechanical engineering to build smarter, more capable, and more efficient systems by augmenting their own expertise with AI capabilities.”

Microsoft Build 2020: read all our coverage here.