
Datacenter energy costs outpacing hardware prices

It's estimated that the power a server burns over its lifetime will soon cost more than the server itself.

Last week's EmTech meeting played host to a panel focused on managing energy use in datacenters, featuring representatives from Lawrence Berkeley Labs and the Uptime Institute, along with people from Google, Intel, and Yahoo. Everyone but Yahoo's Scott Noteboom discussed where power efficiencies were improving and identified areas that still needed work, so their points are best organized by topic rather than by speaker. Noteboom, for his part, described how Yahoo built an extremely energy-efficient datacenter in upstate New York, which we'll get back to in a bit.

Nearly all of the speakers recognized that, from a market perspective, building an efficient datacenter is increasingly critical. Jonathan Koomey, who's affiliated with LBL and Stanford, said that power use by US datacenters doubled between 2000 and 2005, despite the fact that the period saw the dot-com bust. Uptime's Kenneth Brill told the audience that, currently, the four-year cost of a server's electricity is typically the same as the cost of the server itself, while John Haas of Intel said that 2010 is likely to be the point where the electricity cost of a server over its lifetime passes the price of the hardware.
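
Those crossover claims are easy to sanity-check with a back-of-the-envelope calculation. The sketch below uses assumed figures for power draw, electricity rate, facility overhead, and purchase price, none of which came from the panel; it just shows how quickly four years of electricity can catch up with a server's sticker price.

```python
# Back-of-the-envelope check on lifetime electricity cost vs. hardware price.
# Every number below is an illustrative assumption, not a figure from the panel.

HOURS_PER_YEAR = 24 * 365

def lifetime_electricity_cost(avg_watts, years, dollars_per_kwh, facility_overhead=2.0):
    """Electricity cost over a server's life, with an assumed facility
    overhead factor (cooling, power distribution) applied to the IT load."""
    kwh = avg_watts / 1000 * HOURS_PER_YEAR * years * facility_overhead
    return kwh * dollars_per_kwh

server_price = 1500   # assumed purchase price, in dollars
electricity = lifetime_electricity_cost(avg_watts=300, years=4, dollars_per_kwh=0.10)
print(f"4-year electricity: ${electricity:,.0f} vs. hardware: ${server_price:,}")
# With these assumptions, the two figures land in the same ballpark,
# consistent with Brill's four-year observation.
```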

The price of power also tends to get magnified at the datacenter level. Haas said that, based on his company's estimates, one watt saved at the server can save 2.84W at the datacenter level, while Google's Chris Malone claimed that cooling dominates the additional costs, running at about twice the cost of everything else combined. Once the power and air conditioning infrastructure is factored in, Brill said, the minimum capital expenditure for a $1,500 server has now cleared $8,000. "Profits will deteriorate dramatically if datacenter costs don't get contained," he concluded.
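
That 2.84x multiplier means every watt trimmed at the server compounds at the facility. A minimal sketch of the arithmetic, with the electricity rate as an assumed placeholder:

```python
# How a watt saved at the server compounds at the facility level, using the
# 2.84x figure cited by Haas; the electricity rate is an assumed placeholder.

SERVER_TO_FACILITY = 2.84        # facility watts saved per server watt saved
DOLLARS_PER_KWH = 0.10           # assumed utility rate
HOURS_PER_YEAR = 24 * 365

def annual_savings(server_watts_saved):
    facility_watts = server_watts_saved * SERVER_TO_FACILITY
    dollars = facility_watts / 1000 * HOURS_PER_YEAR * DOLLARS_PER_KWH
    return facility_watts, dollars

watts, dollars = annual_savings(50)   # e.g., trimming 50 W from one server
print(f"50 W at the server -> {watts:.0f} W at the facility, ~${dollars:.0f}/year")
```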

Fortunately, the panel described potential solutions at nearly every level of the datacenter. More efficient processors are a key driver; LBL's Koomey said that processing efficiency, measured as computations per kilowatt-hour, has been doubling about every 1.6 years, going back to the vacuum tube era of the 1940s. Plotted on a logarithmic scale, it makes for a remarkably linear trend line. Intel's Haas told Ars that it's not only computational efficiencies driving this trend: current Intel processors devote a million gates per socket to energy management and, when idle, consume only about 30 percent of what they do under heavy loads. As a result, the company estimates that 185 single-core servers from 2005 could be replaced by 21 modern Xeons, for energy savings of nearly 90 percent.
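
To make those two figures concrete, here's a small sketch. The 1.6-year doubling period and the 185-to-21 consolidation ratio come from the talk; the per-box wattages are assumptions chosen only to illustrate the arithmetic.

```python
# Two quick illustrations of the trends cited on the panel. The 1.6-year
# doubling period and the 185 -> 21 consolidation come from the talk; the
# per-box wattages are assumptions chosen only to show the arithmetic.

def relative_efficiency(years, doubling_period=1.6):
    """Relative computations-per-kWh after `years`, per Koomey's trend."""
    return 2 ** (years / doubling_period)

print(f"Efficiency gain over a decade: ~{relative_efficiency(10):,.0f}x")

old_count, new_count = 185, 21       # servers before and after consolidation
old_watts, new_watts = 350, 400      # assumed average draw per box
reduction = 1 - (new_count * new_watts) / (old_count * old_watts)
print(f"Energy reduction from consolidation: {reduction:.0%}")
# Roughly 87% with these assumed draws, in the same ballpark as
# Intel's estimate of nearly 90%.
```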

The increasing efficiency of the processor is starting to drive companies to look elsewhere for further gains. For example, Malone described how Google has ditched its UPS units and instead placed a small battery on the motherboard side of the power converters. Charging it via the DC lines provides UPS functionality with what he called "nearly perfect efficiencies."

But the biggest remaining efficiencies are likely to be in management and facilities. Nearly everyone agreed that the worst efficiency comes when hardware sits idle, since even the most efficient hardware draws a significant amount of power while doing nothing. Both Brill and Koomey said that the companies doing cloud computing are far, far better at avoiding this than business or scientific users, and the lessons they've learned really need to be adopted elsewhere. Brill also pointed out that although Intel provides power management capabilities at the processor level, these often have to be activated via software, and a lot of companies don't bother.
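
Brill's point about unused power management is easy to check on a typical Linux host. The sketch below assumes the standard cpufreq sysfs interface; available governor names vary by kernel and driver, and changing the setting requires root, so treat it as an illustration rather than a recipe.

```python
# Inspect (and optionally set) the CPU frequency-scaling governor on Linux.
# A minimal sketch assuming the standard cpufreq sysfs interface; available
# governor names vary by kernel and driver, and writing requires root.

from pathlib import Path

CPU_ROOT = Path("/sys/devices/system/cpu")

def current_governors():
    """Return a {cpu: governor} mapping for every CPU exposing cpufreq."""
    governors = {}
    for gov_file in sorted(CPU_ROOT.glob("cpu[0-9]*/cpufreq/scaling_governor")):
        governors[gov_file.parts[-3]] = gov_file.read_text().strip()
    return governors

def set_governor(governor="powersave"):
    """Switch every CPU to the given governor (requires root privileges)."""
    for gov_file in CPU_ROOT.glob("cpu[0-9]*/cpufreq/scaling_governor"):
        gov_file.write_text(governor)

if __name__ == "__main__":
    for cpu, governor in current_governors().items():
        print(f"{cpu}: {governor}")
```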

At the facilities level, since cooling dominates, controlling its use is the clearest path to greater efficiency. Several of the speakers said that although a lot of the hardware has recommended operating temperatures, these are often based on out-of-date information; the hardware can actually tolerate temperatures that are quite a bit higher. In addition to raising the temperature, Google's Malone said that controlling the airflow and using evaporative cooling can avoid the use of chillers entirely, yielding significant power savings. Haas pointed out that a group he works with, called The Green Grid, has an interactive map that displays how often the outdoor air temperature in a given location falls below a set level, which indicates how many hours a year mechanical cooling can be skipped entirely.
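
The Green Grid map is essentially counting hours of "free cooling." A minimal version of that calculation, assuming you already have a year of hourly outdoor temperatures for a site (the 25°C threshold and the synthetic data are placeholders, not Green Grid figures):

```python
# Estimate "free cooling" hours: how often outdoor air alone is cool enough.
# Assumes `hourly_temps_c` holds a year of hourly readings (8,760 values)
# from local weather data; the 25 C threshold and the synthetic example data
# are placeholders, not Green Grid figures.

import math

def free_cooling_hours(hourly_temps_c, threshold_c=25.0):
    """Count hours where outside air is at or below the cutoff temperature."""
    return sum(1 for t in hourly_temps_c if t <= threshold_c)

# Synthetic example: a crude sinusoidal year of hourly temperatures.
hourly_temps_c = [12 + 14 * math.sin(2 * math.pi * h / 8760) for h in range(8760)]

hours = free_cooling_hours(hourly_temps_c)
print(f"Free cooling available {hours} of 8,760 hours ({hours / 8760:.0%})")
```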

With all these options available, the biggest problem is generally institutional inertia. As Koomey put it, there are split incentives: the cheaper hardware may not be the efficient hardware, which produces what he termed "perverse behavior." Haas pointed out that a company's energy buyers and IT managers may not report to the same executives, leaving them with little incentive to cooperate on lowering a different division's costs.

All of that nicely set the stage for Yahoo's Noteboom, who described what happens when a company does organize a datacenter for energy efficiency. As recently as 2005, Yahoo was entirely dependent on colocation facilities that hadn't been built with efficiency in mind. As a result, he estimated that 60 percent of the power was wasted, and the facilities went through enormous amounts of water. Since then, Yahoo has built five new facilities, each incorporating new ideas. Although its server footprint has grown by a factor of 12 since 2005, datacenter costs are only one-quarter of what they used to be.

He then described Yahoo's latest facility, being built in upstate New York. The buildings are oriented according to the prevailing winds that come off the Great Lakes, with vents along the walls and a high central cupola that allows waste heat to escape. "The entire building is an air handler," he said, noting that servers are also laid out within it to maximize the impact of their fans. As a result, the current estimate is that it will require external cooling for only 212 hours in an average year, and that will be provided via evaporative cooling. Yahoo estimates that the cost to cool the facility will be only about one percent of what it was paying during its colocation days.

The company is looking into the potential for further improvements, like shutting off server fans in favor of larger, more efficient external ones and eliminating UPS systems. It's also exploring cross-facility load management: sending work to facilities where power and cooling costs are lower.
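
Noteboom didn't go into how that scheduling would work, but the core idea, routing work to whichever site currently has the cheapest combined power and cooling, can be sketched in a few lines. The facility names, power rates, and cooling overheads below are invented purely for illustration.

```python
# Toy cross-facility load placement: route new work to whichever site has the
# lowest combined power and cooling cost right now. The facility names, power
# rates, and cooling overheads are invented purely for illustration.

facilities = {
    "site_ny": {"power_per_kwh": 0.06, "cooling_overhead": 0.05},
    "site_wa": {"power_per_kwh": 0.05, "cooling_overhead": 0.15},
    "site_ca": {"power_per_kwh": 0.12, "cooling_overhead": 0.40},
}

def effective_rate(site):
    """Dollars per IT kWh once the cooling overhead is folded in."""
    return site["power_per_kwh"] * (1 + site["cooling_overhead"])

def cheapest_facility(facilities):
    """Name of the facility with the lowest effective rate."""
    return min(facilities, key=lambda name: effective_rate(facilities[name]))

print("Route new work to:", cheapest_facility(facilities))   # -> site_wa
```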

Overall, the Yahoo presentation gave a good sense of what's possible when a company focuses its attention on datacenter energy use and thoroughly adopts best practices. The cost figures provided by the rest of the panel suggested that, if current trajectories continue, it will be harder to justify not making this sort of effort.

Further reading

Dr. Koomey wrote Ars to point out that some of his research is hosted on an Intel site dedicated to energy efficiency.
