Data centers can learn sustainability from the cloud

Most of the proposed solutions for green data centers need improvement. It’s time to be more invasive and take a few lessons from cloud multitenancy

Everyone wants a green data center, also known as a sustainable data center. There are both policy and technology pushes in this emerging space, such as countries encouraging voluntary reductions in data centers’ carbon footprints. For instance, Finland will be announcing a lower energy tax for data centers that demonstrate a certain degree of greenness. The big advantage of putting data centers in colder climates is that less power is needed for cooling. If it’s five degrees outside, just open a window.

Today, there is a small industry around building and running green data centers. This includes intelligent power management, with some systems driven by AI engines. Some unique solutions involve using the heat generated by data centers to provide heat to surrounding communities or submerging data center “pods” in water to cool them. 

However, we may be on the wrong path. Green technology is table stakes these days for any modern data center that needs to drive cost optimization. Even with new tech that makes power, cooling, and water use more efficient, we’re not getting to the heart of the problem.

What’s that, you may ask?

At the root of data center overconsumption of power is the use of too much data center space in the first place. Look closer, and you’ll see applications and data stores that do not operate in energy-efficient ways, consuming too much processor, memory, network, data storage, and other core services. In other words, they are engineered inefficiently and require far more resources than they should.

Of course, nobody is suggesting that we force developers to dig into their code and databases to second-guess design decisions, but there are some other things we can do:

First, drive energy efficiency as part of devops practices and toolchains. Continuous testing should include testing for energy efficiency, much like security, performance, and stability. Code should get bounced back if it does not use resources efficiently.
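As a rough illustration, a pipeline could run the workload’s test suite under a resource profiler and fail the build when measured consumption blows past an agreed budget. The sketch below assumes a hypothetical JSON resource report and made-up budget numbers; it isn’t tied to any particular CI product.

```python
# Hypothetical CI gate: fail the build if a test run's measured resource use
# exceeds an agreed "energy budget." The report format and limits are assumptions.
import json
import sys

# Assumed per-service budget a team might set; these numbers are illustrative only.
BUDGET = {
    "cpu_core_seconds": 120.0,   # total CPU time consumed by the test workload
    "peak_memory_mb": 512,       # peak resident memory during the run
    "storage_io_mb": 1024,       # megabytes read plus written
}

def check_energy_budget(report_path: str) -> int:
    """Compare a profiling report (JSON) against the budget; return an exit code."""
    with open(report_path) as f:
        measured = json.load(f)

    failures = []
    for metric, limit in BUDGET.items():
        value = measured.get(metric)
        if value is None:
            failures.append(f"{metric}: missing from report")
        elif value > limit:
            failures.append(f"{metric}: {value} exceeds budget of {limit}")

    for failure in failures:
        print(f"ENERGY GATE FAILED - {failure}")
    return 1 if failures else 0

if __name__ == "__main__":
    # Example: python energy_gate.py build/resource_report.json
    sys.exit(check_energy_budget(sys.argv[1]))
```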

I’ve seen applications that required a petabyte of data storage and 300 physical servers reduced to 10 servers and 500GB of storage with a few simple design changes that optimized efficiency. There are always ways to make applications and data storage more resource efficient.
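To put rough numbers on that anecdote: the per-server wattage below is my own assumption (a few hundred watts is typical of a commodity rack server), not a figure from that project, and the math ignores cooling and storage power entirely.

```python
# Back-of-the-envelope savings for the anecdote above.
# The 400 W per-server draw is an assumption for illustration only.
WATTS_PER_SERVER = 400

before_servers, after_servers = 300, 10
before_storage_tb, after_storage_tb = 1000, 0.5   # 1PB versus 500GB

server_reduction = before_servers / after_servers         # 30x fewer servers
storage_reduction = before_storage_tb / after_storage_tb  # 2,000x less storage

saved_kw = (before_servers - after_servers) * WATTS_PER_SERVER / 1000
saved_kwh_per_year = saved_kw * 24 * 365

print(f"Servers cut {server_reduction:.0f}x, storage cut {storage_reduction:,.0f}x")
print(f"About {saved_kw:.0f} kW of continuous draw avoided, "
      f"roughly {saved_kwh_per_year:,.0f} kWh per year")
```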

Second, and most exciting, is the ability to leverage tenant management approaches that can be found in public clouds today. Tenant management can also reduce resource consumption and thus power usage by monitoring and optimizing the application’s use of resources. 

The idea is to take advantage of a purpose-built layer of technology between the application and the resources it needs. While analogous to virtualization and containers, it’s more like a tenant management system found in public clouds, with resource optimization in mind.
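One way to picture that layer is as a broker that tenants (applications) must ask for resources instead of grabbing them directly. The class and quota numbers below are invented for illustration; they don’t reflect any public cloud’s actual tenant manager.

```python
# Hypothetical resource broker sitting between tenant applications and the
# physical resources they consume. Names and quotas are illustrative only.
from dataclasses import dataclass

@dataclass
class Tenant:
    name: str
    cpu_quota: float        # cores the tenant may hold at once
    memory_quota_mb: int
    cpu_in_use: float = 0.0
    memory_in_use_mb: int = 0

class ResourceBroker:
    """Grants resources incrementally instead of letting tenants over-provision."""

    def __init__(self):
        self.tenants: dict[str, Tenant] = {}

    def register(self, tenant: Tenant) -> None:
        self.tenants[tenant.name] = tenant

    def request_cpu(self, name: str, cores: float) -> bool:
        t = self.tenants[name]
        if t.cpu_in_use + cores > t.cpu_quota:
            return False      # deny rather than silently over-allocate
        t.cpu_in_use += cores
        return True

    def request_memory(self, name: str, mb: int) -> bool:
        t = self.tenants[name]
        if t.memory_in_use_mb + mb > t.memory_quota_mb:
            return False
        t.memory_in_use_mb += mb
        return True

# Example: a tenant asks for memory in small increments rather than one huge block.
broker = ResourceBroker()
broker.register(Tenant("billing-app", cpu_quota=2.0, memory_quota_mb=1024))
print(broker.request_memory("billing-app", 256))   # True
print(broker.request_memory("billing-app", 2048))  # False: over quota
```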

Innate functions would include serialization of I/O to reduce the size and activity of storage through technologies such as deduplication of data in flight. The same approach applies to memory optimization, removing the need for applications to grab huge chunks of memory at once. Then there’s processor optimization: using lower-powered processors for applications written for traditional power-hungry processors. The list goes on.
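In-flight deduplication, for instance, boils down to hashing data blocks before they land on disk and storing each unique block only once. The toy sketch below illustrates the idea; it’s not how any particular storage product implements it.

```python
# Toy illustration of deduplicating data "in flight": identical blocks are
# detected by content hash before they reach storage, so only one copy lands.
import hashlib

class DedupWriter:
    def __init__(self, block_size: int = 4096):
        self.block_size = block_size
        self.store: dict[str, bytes] = {}   # content hash -> unique block
        self.layout: list[str] = []         # ordered hashes describing the stream

    def write(self, data: bytes) -> None:
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.store:
                self.store[digest] = block   # new block: physically stored
            self.layout.append(digest)       # duplicates keep only a reference

    def physical_bytes(self) -> int:
        return sum(len(b) for b in self.store.values())

# Example: writing the same 4KB payload 1,000 times stores it only once.
writer = DedupWriter()
writer.write(b"x" * 4096 * 1000)
print(writer.physical_bytes())   # 4096, not 4,096,000
```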

This can net us a theoretical 40 percent to 60 percent reduction in power consumption for most data centers. The biggest bonus? Core applications and data storage don’t need to be changed, and the performance impact is minimal as well.

Will we see widespread adoption of green solutions inspired by multitenant management? I’m not sure. I believe we need to solve this problem at its source. Optimizing data center infrastructure is good, but it won’t get us where we need to be.
