Cloud Infrastructure Options: How to Choose

When considering infrastructure options, mid-sized to large enterprises must carefully evaluate cost implications of hyperconverged, traditional, and distributed cloud.

Nathan Eddy, Freelance Writer

June 4, 2023

4 Min Read

When choosing a cloud infrastructure direction, it’s important to weigh the advantages of three main approaches: traditional, hyperconverged infrastructure (HCI), and distributed cloud architectures.

Traditional or three-tier infrastructure refers to the combination of disaggregated servers, storage arrays, and networking infrastructure.

Hyperconvergence provides a building block software-defined approach to compute, network, and storage on standard server hardware under unified management.

The third option, distributed cloud, refers to the distribution of public cloud services to different physical locations.

Location, Location, Location

Pavel Despot, senior product manager at Akamai, explains that the main differences between hyperconverged, traditional, and distributed cloud infrastructures come down to location.

“A traditional cloud infrastructure, which contains the delivery of computing services, like servers, databases, and networking over the internet, is bound to a chosen location or locations,” he says. “Hyperconverged cloud infrastructures keep hardware components in a single integrated cluster.”

On the other hand, he notes, a distributed cloud infrastructure doesn’t tie workloads to specific locations; instead, workloads and applications can be deployed to multiple geographical endpoints.

“Distributed clouds solve common pain points of the traditional cloud, such as high costs, latency and limited global reach,” he says.

While all three operate on the idea that pools of resources can be drawn upon as needed, the nature and breadth of those pools are different.

Hyperconverged solutions use commonly available hypervisors to allocate resources available for various compute, storage, and networking functions.

“As a result, you’re limited by how much hardware you have in that location,” Despot says. “So, management requires you to keep an eye on how much capacity you’re using and plan ahead.”

He notes it’s important to remember that HCI solutions generate significant overhead, further eating into the capacity available for your workloads.
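Despot’s capacity point can be roughed out quickly. The sketch below is a minimal, hypothetical Python estimate of usable HCI capacity and headroom; the node counts, overhead fraction, current usage, and growth rate are illustrative assumptions, not figures from Akamai or any HCI vendor.

```python
# Illustrative only: rough usable-capacity and headroom check for an HCI cluster.
# Node specs, overhead fraction, usage, and growth rate are hypothetical.

from dataclasses import dataclass


@dataclass
class HciCluster:
    nodes: int
    cores_per_node: int
    tb_per_node: float
    overhead_fraction: float  # share of raw resources consumed by the HCI stack itself

    def usable_cores(self) -> float:
        return self.nodes * self.cores_per_node * (1 - self.overhead_fraction)

    def usable_tb(self) -> float:
        return self.nodes * self.tb_per_node * (1 - self.overhead_fraction)


def months_of_headroom(usable: float, used: float, monthly_growth: float) -> int:
    """Months until usage exceeds usable capacity, assuming steady linear growth."""
    months = 0
    while used + monthly_growth <= usable and months < 120:
        used += monthly_growth
        months += 1
    return months


cluster = HciCluster(nodes=4, cores_per_node=32, tb_per_node=20.0, overhead_fraction=0.15)
print(f"Usable cores: {cluster.usable_cores():.0f}, usable storage: {cluster.usable_tb():.1f} TB")
print("Months of storage headroom:",
      months_of_headroom(cluster.usable_tb(), used=45.0, monthly_growth=1.5))
```

The takeaway matches Despot’s warning: once the overhead is subtracted, the remaining pool is fixed by the hardware on site, so planning ahead means watching how quickly usage grows toward that ceiling.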

Flexibility and Scalability

Cory Peters, vice president of cloud services at SHI International, explains hyperconverged, traditional, and distributed cloud differ in terms of scalability and flexibility.

“Hyperconverged infrastructure offers seamless scalability and flexibility through its integrated approach and software-defined resource allocation,” he says. “Traditional infrastructure presents limitations in scalability and flexibility due to its fragmented nature and manual configuration processes.”

Distributed cloud infrastructure provides scalability and flexibility benefits, particularly in edge computing scenarios, by distributing resources closer to end-users and enabling dynamic resource allocation.

“One industry example of this could be an autonomous vehicle company employing a distributed cloud infrastructure to support its fleet,” Peters explains.

Edge computing capabilities let vehicles process sensor data on board and make instantaneous decisions.

This process ensures safety and responsiveness without relying on a centralized cloud infrastructure. 
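The latency math behind this example can be sketched in a few lines. The Python snippet below compares a hypothetical on-board, edge, and central-cloud decision path; the millisecond figures are assumptions chosen for illustration, not measurements.

```python
# Hypothetical latency budget: on-board vs. edge vs. central-cloud decision path.
# All millisecond figures are illustrative assumptions, not benchmarks.

PROCESSING_MS = 10.0        # time to run the perception/decision step itself
EDGE_RTT_MS = 5.0           # assumed round trip to a nearby edge location
CENTRAL_RTT_MS = 120.0      # assumed round trip to a distant central region


def decision_latency(network_rtt_ms: float, processing_ms: float = PROCESSING_MS) -> float:
    """Total time from sensor reading to decision for a given network round trip."""
    return network_rtt_ms + processing_ms


for label, rtt in [("on-board", 0.0), ("edge site", EDGE_RTT_MS), ("central cloud", CENTRAL_RTT_MS)]:
    print(f"{label:13s}: {decision_latency(rtt):6.1f} ms")
```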

“Understanding these differences is essential for organizations to make informed decisions about which infrastructure model aligns best with their scalability and flexibility requirements,” Peters says. “By selecting the right model, businesses can ensure they have the necessary agility and adaptability to meet evolving demands and drive innovation.”

Considering Cost, Management Factors

Swaminathan Chandrasekaran, principal and global cloud CoE lead at KPMG, cautions that distributed cloud infrastructure can raise costs if not properly managed.

“You need to consider data transfer costs for network ingress and egress between clouds as well as properly utilizing commitment discounts for workload placement on provider contracts,” he says.
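Both cost levers Chandrasekaran mentions, inter-cloud egress and commitment discounts, reduce to simple arithmetic. The Python sketch below uses placeholder rates; the per-GB egress price and discount percentage are assumptions, not any provider’s actual pricing.

```python
# Rough cost model for inter-cloud egress plus commitment discounts.
# EGRESS_RATE_PER_GB and COMMIT_DISCOUNT are assumed placeholders, not real pricing.

EGRESS_RATE_PER_GB = 0.08   # assumed $/GB for data leaving one cloud for another
COMMIT_DISCOUNT = 0.30      # assumed discount for committed-use contracts


def monthly_egress_cost(gb_transferred: float) -> float:
    """Cost of data transferred between clouds in a month."""
    return gb_transferred * EGRESS_RATE_PER_GB


def monthly_compute_cost(on_demand_cost: float, committed: bool) -> float:
    """On-demand compute cost, optionally reduced by a commitment discount."""
    return on_demand_cost * ((1 - COMMIT_DISCOUNT) if committed else 1.0)


# Example: a workload split across two clouds that ships 5 TB between them each month.
egress = monthly_egress_cost(5_000)
compute = monthly_compute_cost(on_demand_cost=12_000, committed=True)
print(f"Egress: ${egress:,.0f}/mo  Compute: ${compute:,.0f}/mo  Total: ${egress + compute:,.0f}/mo")
```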

The biggest cost difference between traditional infrastructure in your own data center and the public cloud is the shift from a CapEx model, where you own your infrastructure assets, to an OpEx model, where you pay for what you use.

“You can further optimize costs in an OpEx model with burst-capacity scenarios that have high resource demands in short or infrequent intervals,” Chandrasekaran adds.
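The CapEx-versus-OpEx trade-off for the burst-capacity case Chandrasekaran describes can be roughed out the same way. In the hypothetical Python comparison below, hardware prices, hourly rates, and burst hours are all assumed placeholders.

```python
# Illustrative CapEx vs. OpEx comparison for a bursty workload.
# Hardware prices, hourly rates, and burst hours are assumed placeholders.

HOURS_PER_YEAR = 8760

# CapEx: buy enough hardware to cover the peak, amortized over an assumed 4 years.
peak_servers = 50
server_purchase_price = 10_000
amortization_years = 4
capex_per_year = peak_servers * server_purchase_price / amortization_years

# OpEx: pay hourly only for the capacity actually running.
baseline_servers = 10
burst_servers = 50                # extra servers during bursts
burst_hours_per_year = 200        # assumed short, infrequent bursts
hourly_rate = 0.50                # assumed $/server-hour
opex_per_year = (baseline_servers * HOURS_PER_YEAR
                 + burst_servers * burst_hours_per_year) * hourly_rate

print(f"CapEx (own peak capacity): ${capex_per_year:,.0f}/yr")
print(f"OpEx  (pay for what runs): ${opex_per_year:,.0f}/yr")
```

Under these assumed numbers the pay-per-use model wins precisely because the bursts are short and infrequent; a workload that runs at peak most of the year would tilt the comparison the other way.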

He says with traditional infrastructure, organizations must plan for, procure, deploy, and provision hardware for each new use case or increase in capacity demand from the business.

“This can generally take weeks to months for an environment that can be delivered before it can even be made fit-for-purpose for an application for the business,” he explains. “Applications and systems are at greater risk of impact from hardware failure and could see longer mean time to recovery in such situations.”

He points out that HCI and distributed cloud infrastructures allow for on-demand provisioning, greatly reducing the time to market for new solutions to power the business.

“By centralizing these virtual resources behind a single control plane, you also gain efficiencies in managing and maintaining these IT resources,” he says. “With built-in levels of resiliency and greater portability of virtual environments, mean time to recovery at times of failure is greatly reduced.”

Impact on Agility, Speed

Peters says the choice of infrastructure type has a significant impact on the agility and speed of IT applications within an organization.

“Hyperconverged infrastructure stands out in terms of agility and speed, thanks to its integrated architecture and software-defined resource allocation,” he says.

Traditional infrastructure presents challenges in both agility and speed due to its fragmented nature and manual configuration processes.

Distributed cloud infrastructure excels in agility and speed, especially in edge computing scenarios, by bringing resources closer to end-users and reducing network latency.

He says understanding the impact of different infrastructure types on the agility and speed of IT applications helps organizations make informed decisions that align with their application requirements and business objectives.

“By choosing the right infrastructure model, businesses can optimize the agility and speed of their IT applications, leading to improved productivity, customer satisfaction, and competitive advantage,” Peters says.


About the Author

Nathan Eddy

Freelance Writer

Nathan Eddy is a freelance writer for InformationWeek. He has written for Popular Mechanics, Sales & Marketing Management Magazine, FierceMarkets, and CRN, among others. In 2012 he made his first documentary film, The Absent Column. He currently lives in Berlin.

