Shail Khiyara
Contributor

Generative AI in enterprises: LLM orchestration holds the key to success

Opinion
Dec 06, 2023 | 10 mins
Artificial Intelligence | Generative AI

In the dynamic landscape of AI, LLMs represent a pivotal breakthrough. Unlike traditional AI models, which must be retrained as data changes, LLMs can adapt in real time through context and retrieval. This flexibility mirrors human learning and positions LLMs as essential for crafting more resilient and efficient AI systems.


This article was co-authored by Shail Khiyara, Founder, VOCAL COUNCIL, and Rodrigo Madanes, EY Global Innovation AI Leader. The views reflected in this article are the views of the authors and do not necessarily reflect the views of the global EY organization or its member firms.

Many enterprises are accelerating their artificial intelligence (AI) plans, and in particular moving quickly to stand up a full generative AI (GenAI) organization, tech stack, projects, and governance. Most are focusing on choosing the right foundation model to use, leaving the choice of large language model (LLM) orchestration as an afterthought.

We think this is a mistake, as the success of GenAI projects will depend in large part on smart choices around this layer. Having taken a close look at the challenges of automation orchestration in an earlier article, here we highlight the challenges and strategies specific to GenAI, focusing on what many are calling the orchestration layer.

In this article, we dive into why the LLM orchestration layer is important, present the challenges of setting one up in an enterprise, and outline the next steps that CIOs and IT directors should be taking. For readers short on time, you can skip to the section titled Strategies for effective LLM orchestration.

Mastering the complexity of LLM orchestration

Those of us who have been involved in automation have learned that orchestration is key as bots grow in number. While automation orchestrators have improved and some have moved to the cloud, challenges remain, and these have limited orchestrators to basic operational bot metrics.

With LLMs, this orchestration complexity is intensified and there is a need to manage it in a coherent way. Think of LLM orchestration as a behind-the-scenes planner like an aircraft dispatcher. In the latter's case, your safety is directly tied to the dispatcher's ability to handle myriad tasks: plan routes, check weather, communicate accurately and clearly, and coordinate with various external entities. Likewise, LLM orchestration plans how your app talks to large language models and keeps the conversation on track. In the end, if done skillfully, all needed information is shared correctly and operations run smoothly.

LLM orchestration: the backbone of enterprise AI integration and continuous learning

LLM orchestration provides a structured method for overseeing and synchronizing the functions of LLMs, aiming for their smooth integration into a more expansive AI network. This orchestration layer acts as a bridge, effortlessly merging various AI elements, streamlining operations and encouraging an environment of ongoing learning and enhancement.

This orchestration layer amplifies the capabilities of the foundation model by incorporating it into the enterprise infrastructure and adding value. Key roles of the orchestration layer, illustrated in the sketch after this list, include:

  • Acting as an integration layer between LLMs, the enterprise data assets, and applications
  • Retaining memory during a user's conversational sessions, because foundation models can be stateless
  • Linking multiple LLMs in a chain for more complex operations
  • Functioning as a user’s proxy, devising intricate strategies for executing complicated tasks
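To make the memory and chaining roles concrete, here is a minimal, framework-free Python sketch. The call_llm stub, model names, and prompts are hypothetical stand-ins, not any specific vendor's API.

```python
from dataclasses import dataclass, field

def call_llm(model: str, prompt: str) -> str:
    """Placeholder for a real model call (e.g., via a vendor SDK)."""
    return f"[{model} response to: {prompt[:40]}...]"

@dataclass
class SessionOrchestrator:
    history: list[str] = field(default_factory=list)  # session memory

    def ask(self, user_message: str) -> str:
        # Replay prior turns so the stateless model "remembers" the session.
        context = "\n".join(self.history)
        draft = call_llm("drafting-model", f"{context}\nUser: {user_message}")
        # Chain a second model to refine the first model's output.
        answer = call_llm("review-model", f"Improve this draft: {draft}")
        self.history.append(f"User: {user_message}")
        self.history.append(f"Assistant: {answer}")
        return answer

session = SessionOrchestrator()
print(session.ask("Summarize our open purchase orders."))
print(session.ask("Which of those are overdue?"))  # relies on retained memory
```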

Examples of components in this layer include plug-ins, which can fetch real-time information and retrieve data from enterprise assets; both capabilities are essential for company information systems. Other typical components required for an enterprise system are access control (so that each user sees only what they are entitled to) and security. We are beginning to see commercial products for LLM orchestration, as well as commonly used open-source frameworks such as LangChain and LlamaIndex.
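As a rough illustration of the plug-in idea, the following sketch registers tools with the orchestration layer and gates one behind a per-user entitlement check. The tool names and the authorization rule are invented for illustration.

```python
from typing import Callable

# Registry mapping tool names to callables of (user, query) -> result.
TOOLS: dict[str, Callable[[str, str], str]] = {}

def register_tool(name: str):
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@register_tool("weather")
def weather(user: str, query: str) -> str:
    return "72F and sunny"  # stand-in for a real-time data lookup

@register_tool("hr_records")
def hr_records(user: str, query: str) -> str:
    if user != "hr_admin":  # access control: only entitled users see HR data
        raise PermissionError("Not entitled to HR data")
    return "record found"

print(TOOLS["weather"]("alice", "weather in Paris"))
```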

Coordinating numerous intricate language models might appear daunting, but when approached correctly, it can become a game-changing asset for those aiming to enhance their GenAI abilities. Effective management of LLMs is crucial to fully harness the potential of these potent tools and smoothly incorporate them into your operations.

Amol Rajmane, DuPont

Unlocking LLM orchestration: navigating enterprise challenges

While orchestration offers significant promise for enhancing AI capabilities, it comes with its own set of complex challenges that require thoughtful planning and strategy. These challenges include:

  • Data security and privacy: The critical issue of safeguarding data as it moves and interacts within the orchestrated system cannot be overstated.
  • Scalability: As a business expands, the orchestration framework must be designed to scale, accommodating a growing array of LLMs and data flows.
  • Complexity: Managing a diverse set of LLMs, each with unique operational needs and learning models, presents a considerable challenge.

LLMs form the backbone of intelligent automation by enabling systems to learn, adapt, and evolve with minimal manual intervention. Unlike traditional models, LLMs can draw on real-time data through retrieval and context, enhancing the agility and responsiveness of automated systems. By minimizing the need for manual retraining and tuning, LLMs help reduce operational overhead and accelerate decision-making. Over time, systems orchestrated with LLMs are poised to improve as they incorporate new data and feedback, making intelligent automation progressively more effective.

At present, the market for commercial orchestration products is still maturing. IT departments are left with the choice of either adopting these emerging solutions or assembling their own orchestration systems from various components.

Another obstacle is the limited pool of experts in this emerging field. The rapid evolution of the domain means that there are few true specialists, making it difficult for enterprises to identify the right talent for their needs. This is akin to the challenge of choosing a skilled doctor when one lacks medical expertise.

Lastly, the orchestration layer intersects with other key areas of enterprise architecture, such as intelligent automation, integration software, and application programming interface (API) switchboards. This necessitates careful planning to delineate responsibilities for task allocation within the organization.

Integration glue: the LLM orchestration layer

To fully unlock the capabilities of LLMs, a well-designed orchestration framework is essential. This framework, often referred to as the integration glue, acts as the central hub that cohesively blends different AI technologies, ensuring they function synergistically within a larger AI network.

Implementing such a framework requires a seamless connection between user-facing applications like GenAI and back-end systems such as enterprise resource planning (ERP) databases. IT departments must tread carefully to avoid falling into the trap of accumulating outdated or redundant automation code.

In today’s automation landscape, actions are typically event-driven. For instance, consider a conversational AI interface similar to ChatGPT. Users might want to query their ERP system to check the status of their open purchase orders. In such cases, the orchestration layer has multiple responsibilities (illustrated in the sketch after this list), including to:

  • Determine that the query requires data from the ERP system
  • Formulate the appropriate query to the enterprise system, using a back-end standard such as SQL, REST, or GraphQL
  • Authenticate the user’s identity to ensure data privacy
  • Interact with the enterprise system to fetch the required data
  • Return the data to the user in a conversational format
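A minimal Python sketch of these responsibilities, assuming a SQL back end: the in-memory database, table schema, and authorization stub below are hypothetical stand-ins for a real ERP system and enterprise access control.

```python
import sqlite3

# Seed an in-memory database that stands in for the ERP system.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE purchase_orders (po_number TEXT, owner TEXT, status TEXT)"
)
conn.executemany(
    "INSERT INTO purchase_orders VALUES (?, ?, ?)",
    [("PO-1001", "alice", "OPEN"), ("PO-1002", "alice", "CLOSED")],
)

def is_authorized(user_id: str, resource: str) -> bool:
    return True  # stand-in for the enterprise access-control check

def handle_query(user_id: str, message: str) -> str:
    # 1. Determine that the query needs ERP data (a production system
    #    might use an LLM classifier or function calling here).
    if "purchase order" not in message.lower():
        return "[answered directly by the LLM]"

    # 2. Authenticate and authorize the user before touching enterprise data.
    if not is_authorized(user_id, resource="purchase_orders"):
        raise PermissionError("User may not view purchase orders")

    # 3. Formulate the back-end query (parameterized SQL in this sketch).
    rows = conn.execute(
        "SELECT po_number, status FROM purchase_orders "
        "WHERE owner = ? AND status = 'OPEN'",
        (user_id,),
    ).fetchall()

    # 4.-5. Fetch the data and return it conversationally (a production
    #    system would phrase the answer through the LLM itself).
    lines = [f"{po} is {status.lower()}" for po, status in rows]
    return "You have these open purchase orders: " + "; ".join(lines)

print(handle_query("alice", "What's the status of my open purchase orders?"))
```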

LLM orchestration is not just about technology alignment; it’s about strategic foresight. Virgin Pulse is setting the stage for the future by crafting an LLM orchestration strategy that harmonizes low-code development and RPA. This isn’t just automation; it’s a finely tuned approach that enhances our digital solutions with the invaluable element of human judgment.

Carlos Cardona, Virgin Pulse

The strength of the orchestration layer lies in its ability to leverage existing, mature frameworks rather than building all functionalities from scratch. This approach ensures a robust architecture that safeguards data privacy, allows for seamless system integration, and offers various connectivity options, making the system both maintainable and scalable.

Strategies for effective LLM orchestration

Having explored the imperative of weaving LLM orchestration into your GenAI stack, and the challenges involved, we now outline strategies IT departments can use to navigate them.

Vendor and tool selection

One of the pivotal decisions in establishing an effective LLM orchestration layer is the selection of appropriate vendors and tools. This choice is not merely a matter of features and functionalities but should be aligned with the broader AI and automation strategy of the enterprise. Here are some key considerations:

a) Does the vendor’s offering align with your enterprise goals?

b) Does the vendor offer a high degree of customization to adapt to your enterprise needs?

c) Does the tool provide security and compliance features such as end-to-end encryption, robust access controls, and audit trails?

d) How well does the tool integrate with your tech stack? Compatibility issues can lead to operational inefficiencies and increased overhead in the long run.

Architecture development

The primary objective of architectural development in the context of LLM orchestration is to create a scalable, secure, and efficient infrastructure that can seamlessly integrate LLMs into the broader enterprise ecosystem.

While there are several components to this, a few key ones include data integration capabilities, a security layer, a monitoring and analytics dashboard, scalability mechanisms, and centralized governance.
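One way to make those components concrete is a single declarative configuration that the orchestration layer reads at startup. The sketch below is purely illustrative; every key and value is a hypothetical example, not a product schema.

```python
# Hypothetical orchestration-layer configuration covering the key
# architectural components named above.
orchestration_config = {
    "data_integration": {
        "connectors": ["erp_sql", "crm_rest", "docs_vector_store"],
    },
    "security": {
        "encryption_in_transit": True,
        "access_control": "role_based",
        "audit_log": "enabled",
    },
    "monitoring": {
        "dashboard": "latency_cost_quality",
        "alert_on": ["error_rate", "token_spend"],
    },
    "scalability": {
        "autoscale_workers": True,
        "max_concurrent_sessions": 500,
    },
    "governance": {
        "model_registry": "central",
        "approved_models": ["model-a", "model-b"],
    },
}
```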

Scalability and flexibility in LLM orchestration

In a robust LLM orchestration layer, scalability and flexibility are critical. Key functionalities include dynamic resource allocation for task-specific computational needs and version control for seamless LLM updates. Real-time monitoring and state management adapt to user demands, while data partitioning and API rate limiting optimize resource use. Query optimization ensures efficient routing, making the system both scalable and flexible to evolving needs.
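Two of these functionalities, API rate limiting and query routing, can be sketched in a few lines of Python. The token-bucket parameters and the length-based routing rule below are illustrative assumptions, not recommendations.

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter: refills at `rate` tokens/second."""

    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = float(capacity), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)  # ~5 requests/second per tenant

def route(query: str) -> str:
    """Send short lookups to a cheap model, longer analyses to a larger one."""
    if not bucket.allow():
        raise RuntimeError("Rate limit exceeded; retry later")
    model = "small-model" if len(query) < 200 else "large-model"
    return f"[{model} handles: {query[:30]}...]"

print(route("Status of PO-1001?"))
```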

Talent acquisition

It’s crucial to onboard or develop talent with the skill set to envision and manage this orchestration layer. Ideal teams combine LLM scientists who understand how the models work with developers adept at coding against an LLM’s APIs, a distinction akin to that between front-end and back-end developers.

The imperative of action and the promise of transformation

As we stand on the cusp of a new frontier in AI and enterprise operations, the role of LLM orchestration is not just pivotal — it’s revolutionary. It is no longer a question of ‘if’ but ‘when’ and ‘how’ organizations will integrate these advanced orchestration layers into their AI strategies. Those who act decisively are poised to unlock unprecedented efficiency, innovation, and competitive advantage.

In this rapidly evolving landscape, LLM orchestration will transition from being a technical requirement to a strategic cornerstone — shaping not just enterprises but industries and economies. Engaging proactively with LLM orchestration is not just a prudent venture; it’s a transformational imperative.

Shail Khiyara
Contributor

Based in Silicon Valley, Shail Khiyara is a distinguished thought leader and operational executive in the Intelligent Automation and AI sector. As a recognized Top Voice in AI, Shail has furthered his influence with the 2023 launch of his book titled “Intelligent Automation - Bridging the Gap between Business & Academia”, offering a unique blend of strategic insight and practical execution experience. His next book focuses on personal identity and data, creating digital twins in the world of AI. Shail is also the founder of VOCAL, a global initiative that unites over 90 leading brands worldwide, fostering collaboration among automation and AI leaders. This initiative demonstrates his commitment to driving innovation and thought leadership in the industry, marking him as a pivotal figure in shaping the future of AI and automation. He holds an MS in Engineering and an MBA from Yale University.
