Tuesday, December 8, 2009

Emergence of Fabric as an IT Management Enabler

Last week I attended Gartner's annual Data Center Conference in Las Vegas. Four days packed with presentations and networking (of the social kind). Lots of talk about cloud computing, IT operations, virtualization and more.

Surprisingly, a number of sessions directly referenced compute fabrics -- including "The Future of Server Platforms" (Andy Butler), "Blade Servers and Fabrics - Evolution or Revolution" (Jeff Hewitt), and "Integrated Infrastructure Strengths and Challenges" (Paquet, Dawson, Haight, Zaffros). All were substantive analyses of what fabrics _are_... but there was very little discussion of why they're _important_. In fact, compute fabrics might just be the next big thing after OS virtualization.

Think of it this way: Fabric Computing is the componentization and abstraction of infrastructure (such as CPU, memory, network, and storage). These components can then be logically reconfigured as needed. This is very much analogous to how OS virtualization componentizes and abstracts OS and application software stacks.

However, most fabric-related vendors have so far focused only on the most fundamental level of fabric computing: virtualizing I/O and using a converged network. This mirrors the industry's early belief that OS virtualization was only about the hypervisor. Instead, we need to take a longer view of fabric computing and think about the higher-level value we can create by manipulating infrastructure much as we manipulate VMs. A number of heady thinkers behind the concept of Infrastructure 2.0 are already beginning to crack some of these revolutionary issues.

Enter: Fabric as an Enabler


If we think of "fabric computing" as the abstraction and orchestration of IT components, then there is a logical progression of what gets abstracted, and of what services can then be constructed by logically manipulating the pieces:

1. Virtualizing I/O and converging the transport
This is just the first step, not the destination. Virtualizing I/O means no more stateful NICs and HBAs on the server; instead, I/O presents itself to the OS as any number of configurable devices/ports, and network and storage traffic flow over a single physical wire. The transport can be Ethernet, FCoE, InfiniBand, or others. In this manner, the network connectivity state of the physical server is simplified and can be changed nearly instantaneously.
2. Virtual networking
The next step is to define in software the converged network, its switching, and even network devices such as load balancers. The result is a "wire-once" physical network topology with an infinitely reconfigurable logical topology, which permits physically flatter networks. Provisioning of the network, VLANs, IP load balancing, etc. can all be simplified and accomplished via software as well (see the first sketch after this list).
3. Unified (or Converged) Computing
Now things get interesting: once we can manipulate a server's I/O state and its network connections, we can couple that with software-based profiles of complete server configurations -- literally defining the server, its I/O, networking, storage connections, and even what software boots on it (either a virtual-machine host or a traditional native OS). Having defined the entire server profile in software, we can go further and define an entire environment's profile (see the second sketch after this list).
Defining servers and environments in software allows us to provide (1) High Availability: after a hardware failure, we can simply re-provision a server configuration onto another server in seconds, whether that server was running a VM host or a native OS; and (2) Disaster Recovery: we can re-constitute an entire environment of server profiles, including all of their networking, ports, addresses, etc., even if that environment hosts a mix of VMs and native OSes.
4. Unified Management
To achieve the ultimate in an agile IT environment, there's one remaining step: orchestrating the management of infrastructure together with the management of workloads. I think of this as an ideal Infrastructure-as-a-Service -- physical infrastructure that adapts to the needs of workloads, scaling up/out as conditions warrant, and providing workload-agnostic HA and DR (the third sketch after this list caricatures such a control loop). From an IT agility perspective, we would then be able to abstract nearly all components of a modern data center and logically combine them on the fly as business demands require.
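To make Step 2 a bit more concrete, here is a minimal sketch (in Python, purely for illustration) of what a logical network defined in software might look like. Every name and field, and the `apply_topology` call, are hypothetical assumptions of mine, not any vendor's API.

```python
# Hypothetical logical network definition (Step 2). Nothing here is a real
# vendor API; it only illustrates "wire once, reconfigure in software."
logical_network = {
    "vlans": [
        {"id": 100, "name": "web", "qos": "silver"},
        {"id": 200, "name": "db",  "qos": "gold"},
    ],
    "virtual_switches": [
        {"name": "vsw-prod", "uplinks": ["fabric-A", "fabric-B"], "vlans": [100, 200]},
    ],
    "load_balancers": [
        {"name": "lb-web", "vip": "10.0.100.10", "pool": ["web01", "web02"], "vlan": 100},
    ],
}

def reconfigure(fabric, topology):
    """Push a new logical topology onto the wire-once physical fabric.

    `fabric` stands in for an assumed fabric-management interface; the point
    is that changing VLANs, switching, or load balancing becomes a software
    operation rather than a recabling job.
    """
    fabric.apply_topology(topology)
```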
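For Step 3, here is a similarly minimal sketch of a software-defined server profile and a failover helper. The class, its fields, and the `reprovision` function are invented for illustration only; they show the idea of a server's identity (MACs, WWNs, boot image) living in software rather than in the hardware.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class ServerProfile:
    """A logical server definition, decoupled from any physical machine."""
    name: str
    vnics: List[Dict]                     # virtual NICs: MAC, VLAN, QoS
    vhbas: List[Dict]                     # virtual HBAs: WWN, boot LUN
    boot_image: str                       # hypervisor or native OS image
    assigned_blade: Optional[str] = None  # physical server backing the profile

def reprovision(profile: ServerProfile, spare_pool: List[str]) -> str:
    """On hardware failure, bind the same logical identity to a spare blade.

    Because MACs, WWNs, and boot targets live in the profile rather than on
    the failed hardware, the workload comes back with its identity intact.
    """
    if not spare_pool:
        raise RuntimeError("no spare capacity available")
    profile.assigned_blade = spare_pool.pop(0)
    return profile.assigned_blade

# Example: a database server whose identity survives a blade failure.
db = ServerProfile(
    name="db01",
    vnics=[{"mac": "00:25:b5:00:00:01", "vlan": 200, "qos": "gold"}],
    vhbas=[{"wwpn": "20:00:00:25:b5:00:00:01", "boot_lun": 0}],
    boot_image="esx-4.0",   # could just as well be a native OS image
    assigned_blade="blade-03",
)
new_home = reprovision(db, ["blade-17", "blade-18"])  # "blade-17": same server, new hardware
```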
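Finally, the unified management of Step 4 is easiest to picture as a control loop: watch workload demand and hardware health, and drive the same profile operations shown above. The `environment`, `monitor`, and `fabric` interfaces below are assumptions of mine, not any product's API; this is only a caricature of the orchestration idea.

```python
import time

def orchestrate(environment, monitor, fabric):
    """Hypothetical unified-management loop: infrastructure follows workloads.

    `environment` holds ServerProfile objects (see the previous sketch),
    `monitor` reports demand and hardware health, and `fabric` binds
    profiles to physical blades. All three are assumed interfaces.
    """
    while True:
        for profile in list(environment.profiles):
            # Workload-agnostic HA: re-home any profile whose blade has failed,
            # whether it was running a hypervisor or a native OS.
            if not monitor.is_healthy(profile.assigned_blade):
                fabric.bind(profile, fabric.take_spare_blade())

            # Scale out when demand warrants: clone the profile onto spare gear.
            if monitor.demand(profile) > monitor.capacity(profile):
                clone = environment.clone(profile)
                fabric.bind(clone, fabric.take_spare_blade())

        time.sleep(30)  # coarse polling; a real system would be event-driven
```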
Getting back to the Gartner conference, I now realize one very big missing link: while Gartner has been promoting its Real-Time Infrastructure (RTI) model for some time, it has yet to connect that model to the coming revolution that fabric computing will enable. Maybe we'll see some hint of this next year.

1 comment:

Jon Toor said...

Ken,

Excellent comments on the emergence of fabric computing. Gartner and others agree - this is the next wave in data center architecture.

In fact, the wave may be sooner rather than later. The transition you speak of -- from I/O as a fixed resource to infrastructure-as-a-service -- is actually well along.

Two years ago, Xsigo introduced the concept you outline in Step 3. Xsigo defines a server's I/O state with a "profile" that can be easily replicated and moved among servers. All I/O resources, including storage and network connections, QoS settings, and HA configurations, are contained within that profile.

In 2008 Xsigo announced the first stage of the Unified Management you describe in Step 4. By integrating with VMware's Virtual Center, Xsigo provided a building block... a single platform to view and manage all virtual resources (VMs and virtual I/O).

In 2009 Xsigo demonstrated (with VMware) the data-center-in-a-rack at VMworld (it ran all the demos in the VMware booth), showing 1,000 VMs consolidated to a single 60" rack and managed from a single console.

There is a lot more to go, as you point out. But enterprise users are already realizing large parts of the vision -- and saving big on op ex and cap ex -- using virtual I/O.

For a short video look at virtual I/O at VMworld, go to http://www.xsigo.com.