
How Facebook puts petabytes of old cat pix on ice in the name of sustainability

Internet giant constantly pursues renewable power in search of the "negawatt."

Rows of Open Compute Project racks in a Facebook data center. (Credit: Facebook)

When someone says the word "sustainability," the first thing that leaps to mind is not a data center. These giant buildings full of computer, network, and storage gear are typically power-hungry behemoths with huge cooling systems that keep servers happy and chilled. Their power distribution systems lose kilowatts just shifting electricity from one form to another. And the farms of environmentally unfriendly batteries and the diesel generators on site are there to keep everything running if the power all this gear demands suddenly disappears.

A number of Internet giants have gone to great lengths to change that—building their own data centers and even their own hardware in an effort to make their ever-growing fleets of data centers more environmentally friendly. The upside for them is lower operating costs and a better relationship with the communities they operate in. But for some of these companies, it goes beyond that. Facebook has been among the most aggressive in its efforts to green the data center and to bring much of the industry along with it, open-sourcing technology the company has developed and leading initiatives to develop renewable energy sources for data centers and other operations.

All of this started about eight years ago, when Facebook began designing its very first built-to-suit data center in Prineville, Oregon (it opened in 2011). Since then, the company has hired Bill Weihl as director of sustainability to lead its green efforts. "Around the time I joined Facebook," Weihl told Ars, "we started looking at where our energy comes from. We use a lot of energy and want to make sure it's as clean as possible."

Additionally, Facebook has not only taken these efforts open source, but it has also moved to create a "collaborative process," Weihl said, in which vendors and other companies work with Facebook on its data center components in order "to get to a much better place collectively" on environmental friendliness. In the meantime, Facebook has continued its near-decade-long experiments with how it builds and powers data centers. Today, Facebook stores the billions of cat pictures and other images users share, keeping them in perpetuity, in a drastically different way than even a year or two ago—and it's all in the name of reducing power consumption.

Cold storage

You wouldn't think a few cat pictures could consume so much power. But by 2013, Facebook had over an exabyte of images shared by users via their timelines in the service's "Haystack" photo store. Many of these images are rarely, if ever, viewed again just weeks after they've been shared. But Facebook's data centers have to keep them available—and backed up as well in the event of a disk failure.

That means keeping staggering amounts of storage online. But Facebook engineers developed an approach called "Cold Storage" that allowed the company to keep more than half those disks powered off at any given time, dramatically cutting power consumption. Now the Facebook storage team is looking at cutting that down even further by moving older images to Blu-ray optical disks.

Facebook has opened two Cold Storage facilities—one at its Prineville, Oregon, data center and a second at its Forest City, North Carolina, data center. Each rack in the Cold Storage system can hold 1.92 petabytes of data, so if fully configured with these racks, each facility could store over an exabyte. That capacity will only increase as newer, more storage-dense disks are brought online over time. And because of the way the storage system has been designed, these facilities use less than a sixth of the power of Facebook's traditional data centers.
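
For a rough sense of scale, here's the back-of-the-envelope arithmetic behind that exabyte claim. The 1.92 PB per-rack figure is Facebook's; the decimal exabyte conversion and the resulting rack count are illustrative assumptions:

    # Back-of-the-envelope: how many Cold Storage racks it takes to hold an exabyte.
    # Only the per-rack figure comes from Facebook; the rest is illustrative.
    PB_PER_RACK = 1.92        # petabytes per fully loaded Cold Storage rack
    EXABYTE_IN_PB = 1000      # using decimal units: 1 EB = 1,000 PB

    racks_for_one_exabyte = EXABYTE_IN_PB / PB_PER_RACK
    print(f"Racks needed to hold 1 EB: {racks_for_one_exabyte:.0f}")   # ~521 racks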

In part, that's because these facilities don’t handle "production data" as Facebook defines it. Instead, they're purely used for backup, like some sort of vast disk-based tape library. As a result, the Cold Storage data centers don't have backup power supplies or diesel generators. And there's also been a bit of hardware and software engineering done to ensure the storage servers run as leanly as possible.

The storage servers for Cold Storage are based on the same architecture Ars saw on a visit to Facebook's engineering lab in 2013—the Open Vault storage system, which has been published as open source through the Open Compute Project. The Facebook infrastructure team has made some minor tweaks to the design since then, including tinkering that was more mechanical than technical. In a blog post published in May, Facebook's Krish Bandaru and Kestutis Patiejunas described the root of one of those changes:

"One of our test production runs hit a complete standstill when we realized that the data center personnel simply could not move the racks. Since these racks were a modification of the OpenVault system, we used the same rack castors that allowed us to easily roll the racks into place. But the inclusion of 480 4 TB drives drove the weight to over 1,100 kg, effectively crushing the rubber wheels."

Another modification to the Open Vault design was required to keep the servers from drawing too much current from the Cold Storage facility's power distribution system. Only one drive in each tray of a Cold Storage rack can be powered up at any given time—a limit enforced by the firmware in each drawer. Since the vast majority of the drives are powered down at all times, far less power distribution and cooling capacity is required. There's only one "power shelf" per rack, along with sixteen drive shelves connected to two "head node" servers.
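
Facebook hasn't published that firmware, but the rule it enforces is simple enough to sketch. The Python below is a hypothetical illustration, not Facebook's code; the class, the method names, and the 15-drives-per-tray figure are assumptions:

    # Hypothetical sketch of the "one powered drive per tray" rule described above.
    # Not Facebook's firmware; names and tray size are invented for illustration.
    class ColdStorageTray:
        def __init__(self, num_drives=15):     # assumption: 15 drives per tray
            self.num_drives = num_drives
            self.active_drive = None           # at most one drive spinning at a time

        def read_blob(self, drive_index, offset, length):
            if self.active_drive != drive_index:
                if self.active_drive is not None:
                    self._spin_down(self.active_drive)   # enforce the power budget first
                self._spin_up(drive_index)               # only then draw spin-up current
                self.active_drive = drive_index
            return self._read(drive_index, offset, length)

        # Platform-specific details omitted in this sketch.
        def _spin_up(self, drive_index): ...
        def _spin_down(self, drive_index): ...
        def _read(self, drive_index, offset, length): ...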

On top of all that, the power savings go beyond the facilities themselves. Even the "live" copies of many images can reside on drives that can be turned off most of the time.

"We have a system that allows 80-90 percent of disks turned off," Weihl said. "When a disk is not touched, it gets turned off. It's a matter of having storage layer that knows how to have most of the disks off most of the time, but the data is on a disk that's spinning that you can get to. If you integrate more tightly into the application layer on top of it, you can predict when photos are going to be needed—like when a user is scrolling through photos chronologically, you can see you'll need to load up images soon. You could make a decision to turn off not just the backups, but all the copies of older stuff, and keep only the primaries of recent stuff spinning."

Wringing out efficiency

When it comes to measuring data center efficiency, the metric generally thrown around is Power Usage Effectiveness (PUE). This is the ratio of the total power delivered to the data center to the power actually consumed by the servers, switches, routers, and disks. The lower the PUE number, the better; a perfect score would be 1.0.

Ten years ago, data centers with a PUE of 2.5 were common, meaning that more than half the power they brought in from the wire was either used for things other than computing (like cooling and lighting) or lost in conversions from AC to DC and changes in voltage. Today, data centers in general have improved somewhat: a 2014 survey by the Uptime Institute found that the average self-reported PUE for large data centers had dropped to 1.7. But Facebook has gone to great lengths to push efficiency further. The average PUE for Facebook's Forest City data center over the last year was 1.08, meaning that for every 100 kilowatts powering the racks, a mere 8 kilowatts went to everything else. Facebook's infrastructure team once referred to this as the "negawatt"—the power they never had to consume.
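
As a quick sanity check on those numbers, PUE is simply total facility power divided by the power that reaches the IT gear. A minimal Python sketch (the kilowatt figures are illustrative; only the 1.7 and 1.08 ratios come from the sources above):

    # PUE = total facility power / power delivered to the IT equipment.
    # A PUE of 1.08 means ~8 watts of overhead for every 100 watts reaching the racks.
    def pue(total_facility_kw, it_load_kw):
        return total_facility_kw / it_load_kw

    print(pue(total_facility_kw=170.0, it_load_kw=100.0))   # 1.7,  the 2014 survey average
    print(pue(total_facility_kw=108.0, it_load_kw=100.0))   # 1.08, Facebook's Forest City figure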

Part of that efficiency comes from all the work Facebook has done to change the computing architecture of its data centers; Cold Storage is only one example. At the core of the company's broader push to increase power efficiency is the way its data centers distribute power.

The more traditional approach to data center power in the US looks something like this (though the following is greatly simplified):

  • Electricity comes in from the utility to a main power panel as 480-volt ("city power") alternating current. It is routed through an automatic transfer switch tied to an emergency generator, which can shift the whole load to the generator in the event of a utility failure, and then through an uninterruptible power supply (UPS) distribution switchboard.
  • From the UPS, the power passes to a Power Distribution Unit (PDU) to be sent out to the data center's racks.
  • The PDU distributes power at 208 volts AC to distribution buses in the racks and then to the power supply units in the servers.
  • Each power supply converts the power to direct current, usually 12 volts, for the server's board.

Each of these steps loses some power to conversion. In older data centers, that could account for the loss of as much as 40 percent of the electricity brought in from the utility lines. That has improved dramatically over the past decade, with most data centers now getting about 85 percent efficiency out of their distribution systems. But that's still a 15 percent loss of electricity, and with loads like those Facebook carries, the waste can be substantial.
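
Because each conversion stage wastes a bit of power, the losses compound: end-to-end efficiency is the product of the stage efficiencies. A toy Python calculation makes the point (the per-stage figures below are illustrative assumptions, not measured values):

    # Toy illustration of why chained conversions are costly: end-to-end efficiency
    # is the product of each stage's efficiency. Stage values are assumptions.
    traditional_stages = {
        "UPS (double conversion)": 0.94,
        "PDU transformer (480V to 208V)": 0.97,
        "server power supply (AC to 12V DC)": 0.93,
    }

    end_to_end = 1.0
    for stage, efficiency in traditional_stages.items():
        end_to_end *= efficiency

    print(f"End-to-end distribution efficiency: {end_to_end:.1%}")   # roughly 85%
    # Removing or consolidating a stage raises that product directly, which is the
    # logic behind pushing 480 volts all the way to the rack.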

To reduce that loss, Facebook has gotten rid of some of those conversions and sends power to the racks at a full 480 volts. There is no centralized UPS—the batteries are part of the racks themselves, so UPS capacity is scaled up directly with the addition of more computing power. The power bus of each rack performs conversions to DC or AC voltages as required by the equipment in the rack. As a result, much less power is lost being distributed to the racks.

Other innovations in Facebook's data centers also reduce power consumption—including smart lighting systems that illuminate spaces only where people are present and a reliance on natural light in office areas where possible. But perhaps the biggest single change Facebook has made to increase efficiency is the way its data centers are cooled.
