Direct current in the data center: are we there yet?

Data centers are under increasing pressure to reduce the environmental impact of their operations. Could DC be a viable option?

In a typical data center, around half of the power supplied to the facility is lost in power conversion and distribution, or consumed managing the heat released by those losses and by the IT equipment itself. As rack power densities increase, so does the cooling challenge. Data centers also face mounting pressure to shrink the environmental footprint of their operations. The convergence of external pressure and business mandates has pushed the industry to explore new technologies and new design principles.

DC is the new black (again)

For years the industry has flirted with the notion of moving to direct current for power distribution within the data center. The logic is straightforward: every time power is converted from AC to DC and back again (e.g., to place a UPS with battery backup on the distribution system), energy is lost, mostly in the form of heat. The fewer conversions the power supply undergoes, the lower the losses and the less heat generated. Greater efficiency leads to lower costs, both in capital and O&M.
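As a rough sketch of that arithmetic: the efficiency of a power path is the product of its stage efficiencies, so every conversion stage removed improves the total. The stage values below are hypothetical round numbers for illustration, not measurements of any particular product.

```python
# Sketch only: stage efficiencies are hypothetical, not measured values.
def chain_efficiency(stages):
    """Overall efficiency of a power path is the product of its stage efficiencies."""
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# A notional AC path: UPS rectifier, UPS inverter, PDU transformer, server PSU
ac_path = chain_efficiency([0.97, 0.96, 0.98, 0.92])
# A notional DC path: one front-end rectifier, then the server's DC-DC stage
dc_path = chain_efficiency([0.97, 0.96])
print(f"AC path: {ac_path:.1%}  DC path: {dc_path:.1%}")
```

The exact numbers matter less than the shape of the calculation: each stage multiplies the losses, so cutting stages compounds the benefit.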

DC offers other benefits, too, which we’ll cover in a moment, but it’s instructive to understand a little of its history first. The War of Currents pitted Thomas Edison’s DC system against the AC system championed by Nikola Tesla and George Westinghouse to become the standard on which the US power grid would be built. AC won the day, largely because at the time it was easier to transmit power over long distances using AC at higher voltage.

DC would make a comeback in the mid-1950s with the advent of high voltage direct current (HVDC) power transmission, which ironically was best suited to shipping large amounts of power over long distances. Today, HVDC systems do exactly that.

They are also used to link asynchronous AC grids, enabling neighboring power systems to send power back and forth in a controlled manner. The Cross-Sound Cable linking New York with the New England grid famously enabled Long Island to recover more quickly after the 2003 Northeast Blackout thanks to the DC system’s ability to control the direction and rate of flow of electric current. New York “imported” 330 MW of power from the New England grid.

HVDC transmission shares many of the benefits of DC power distribution at smaller scale: a smaller footprint than a comparable AC system, lower losses, and several reliability advantages as well.

Let’s look at DC’s value proposition in the context of a data center.

The data center use case

DC power distribution uses less copper than a comparable AC system (ABB has observed up to 40 percent less in marine applications), and it does not require rectifiers and transformers, which translates to lower installed cost. Operating efficiency is also better than AC’s, thanks to the lower conversion losses and reduced cooling load noted earlier. How much better? Estimates vary, but a Lawrence Berkeley National Laboratory demonstration in 2006, using contemporary AC equipment, showed DC to be 5 to 7 percent more efficient.[1] AC equipment has evolved since, of course, but herein lies another challenge: even the most efficient server power supplies remain considerably less efficient than market-leading UPS units, so it’s important for data center owners and operators to take a holistic view of their facility’s power consumption.
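To put that 5 to 7 percent range in perspective, a quick back-of-envelope calculation shows what the gain could be worth over a year. The load and tariff below are hypothetical illustration values, not figures from the study.

```python
# Back-of-envelope only: load and tariff are hypothetical illustration values.
IT_LOAD_KW = 1_000        # assumed 1 MW of IT load
HOURS_PER_YEAR = 8_760
PRICE_PER_KWH = 0.10      # assumed flat tariff, USD

savings = {}
for gain in (0.05, 0.07):  # the 5-7 percent range from the LBNL work
    saved_kwh = IT_LOAD_KW * HOURS_PER_YEAR * gain
    savings[gain] = saved_kwh
    print(f"{gain:.0%} gain -> {saved_kwh:,.0f} kWh/yr, "
          f"~${saved_kwh * PRICE_PER_KWH:,.0f}/yr")
```

Even at modest scale, a few percentage points of efficiency compound into six-figure kilowatt-hour savings every year.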

DC power distribution takes up less space than AC, which means more floor space for server racks and/or cooling equipment—important, as many modern data centers are constrained by their ability to cool the equipment they already have. Finally, it’s well documented that a DC system makes it easy to integrate on-site energy sources like solar or fuel cells—or energy storage devices—that produce DC power. Even if these options only cover a small portion of the facility’s total load, they may become increasingly appealing as data centers seek to green their operations.

In addition to the efficiency and cost argument, DC also offers benefits in terms of power quality and system reliability. The design of a DC power system is simpler, with fewer components (and thus fewer points of failure) than the AC alternative, and it eliminates harmonics, phase load balancing and other issues associated with AC. The telecom industry has used 48V DC systems for decades with tremendous results. Japan’s NTT, for example, reported a 10X improvement in reliability using DC compared to an AC system using a single UPS per path, a common configuration.[2]
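The reliability argument can be sketched with a toy series-availability model: a power path is up only when every component in series is up, so multiplying per-component availabilities shows how each extra element erodes the total. The component counts and availability figures below are hypothetical, not NTT’s data.

```python
# Toy model: per-component availabilities are hypothetical illustration values.
def series_availability(avails):
    """A series chain is available only when every element is available."""
    total = 1.0
    for a in avails:
        total *= a
    return total

# Notional AC path with six series elements vs. a DC path with three
ac_avail = series_availability([0.9999] * 6)
dc_avail = series_availability([0.9999] * 3)
downtime_ratio = (1 - ac_avail) / (1 - dc_avail)  # how much more downtime AC sees
```

With identical components, halving the number of series elements roughly halves the expected downtime; real gains depend on the actual components and topology.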

Energy storage devices can be placed directly on the DC bus, and loads can be added as needed without having to re-engineer the power network. This adds up to faster installation, and faster upgrades as the facility grows.

Finally, there is safety. Modern power electronics allow us to limit fault current through design in DC systems, and that is key to reducing risk to personnel and equipment.

A few hurdles

There are, of course, some obstacles standing in the way of wider adoption of DC power distribution in data centers. First, there is a limited selection of DC power supplies for servers, and a shortage of air conditioning units, fire protection gear and building controls that run on DC, all of which would be needed for DC to be a viable choice.

There is also a lack of standards, for example for arc flash and grounding, so each system must be engineered individually, and that adds substantially to the cost of choosing DC. IEC is currently working on a plug and socket standard under TS 62735—more work in this vein is needed to make DC a realistic option for data centers.

Perhaps the greatest challenge, though, is simply the lack of experience among data center owners, operators and contractors. There are design questions like where to put the AC-to-DC conversion, and how to engineer an energy storage device on the DC bus that delivers the same performance as a UPS. Clearly, there will have to be an education process to increase familiarity with DC systems and lower resistance to change.

One approach might be for data centers to build the case for DC at the server level, where most of the savings lie. Doing away with inefficient power supplies and moving to equipment that can accept DC power would score an early win for efficiency and build data center staff confidence in working with DC systems.

Today the global installed base of DC data centers hovers around 10 MW, a tiny fraction of the industry to be sure. Still, the business case for DC remains compelling. If even one hyperscale operator were to make a substantial investment in DC, it could create demand overnight that would then drive the development of the equipment and standards needed to take DC distribution to the next level.

 

Endnotes

[1] “DC Power for Improved Data Center Efficiency,” My Ton, Brian Fortenbery, William Tschudi. March 2008. https://datacenters.lbl.gov/sites/default/files/DC%20Power%20Demo_2008.pdf

[2] “Efficiency and Reliability Analyses of AC and 380V DC Distribution in Data Centers,” Bijen R. Shrestha, Ujjwol Tamrakar, Timothy M. Hansen, Bishnu P. Bhattarai, Sean James, and Reinaldo Tonkoski. IEEE Access via https://www.osti.gov/servlets/purl/1482212

About the author

Dave Sterlace

Dave Sterlace is the Head of Technology for Data Center Solutions at ABB, and has 20+ years of experience in data center power, automation and critical power.