Huge data centers’ surging demands are met with innovative power solutions.
By David Engle
Even in an ailing economy, the data business is booming. Traffic on the Net streams in mega-torrents now, and the global appetite for smart phones and other mobile devices is insatiable (see Additional Content: “Data Center Growth, by the Numbers”).
To handle it all, banks of servers customized for ICT (Information and Communications Technology) are on the march. Not long ago, they could be easily billeted in modest offices, with basic climate amenities like extra cooling and air filters. Now, with byte-processing exploding around the world, data centers are upsizing, in both scale and sheer numbers, and they need customized infrastructure to support them. In the US alone, nearly six million new servers come online every year. Annual growth in computational demand now exceeds 10%. It’s estimated that computing demand at 40% of data centers will surpass available power or cooling capacity within the next two years.†
With this expansion has come a new class of sprawling, multi-acre, industrial park-sized data centers. They’re loaded to the gunwales with servers and storage, cooling, and connectivity. And all of it needs a lot of super-reliable power.
Troy Miller is manager of business development in power quality products for S&C Electric Company, which maintains dozens of offices worldwide. He observes: “A ‘big’ data center used to be one or two megawatts. Now we’re consistently seeing them built on the order of tens of megawatts.”
Miller recently built twin 100-MW substations for the Chicago Mercantile Exchange, which was forced to relocate its in-town data center to a new campus down the road in Joliet, IL. (By comparison, an average commercial-sized data center historically consumes around 1 MW under normal load.)
As in this Midwest example, urban electric utilities, in general, simply cannot generate or distribute sufficient power anymore, on the scale and level of reliability that data centers require. So, spacious rural tracts are being busily bulldozed, to sprout low-rise complexes for this surging server-boom population.
Besides offering cheaper and more abundant acreage, in terms of power provisioning, greenfield sites also prove to be “far more conducive to developing alternative energy—sun, wind and gas-fueled cells, or other new plants,” notes David Appelbaum. He is vice-president of marketing for data center management software provider Sentilla, in Redwood City, CA.
Data center owners are reportedly leaning heavily to this alternative energy: They like it, and they need it. Sites near mountain rivers are juiciest for new hydropower; others in the wind-swept Plains states are sprouting turbine propellers. Still others, newly located in the sunbelt, “can afford acres of solar panels and thousands of square feet of rooftop space” there, notes Appelbaum—on a scale which isn’t doable near cities. Installing onsite renewable power “gives them a better overall energy footprint to handle energy demand peaks, versus relying on the grid,” he adds.
Virtually all of the new, cutting-edge data center campuses “are heavily equipped to handle alternative energy,” finds Appelbaum.
What’s really eyebrow-raising here is the industry’s willingness to shift some of its precious power resourcing sporadically off the grid (and, in a few rare cases, completely off). For an information technology (IT) industry that’s almost obsessively conservative in temperament and notoriously risk-averse, this is both a huge paradox and a paradigm shift.
To illustrate, here are some of the big new sites willing to forge ahead with innovative generation:
- Apple recently announced that the construction of a $1-billion, 300-MW gas-fueled plant, off-grid, will power a 2,200-acre data center in Sparks, NV. It will be complemented by 100 MW of wind energy and 20 MW of geothermal. Apple is the first of what will ultimately be multiple tenants at the location, says K. C. Mares of the Unique Infrastructure Group, a development partner. As originally envisioned, the onsite power output would be owned by the partnership and sold to tenants. An initial load of 70-plus MW is projected to grow steadily. (Adjacent, there’s the NV Energy grid, yielding 540 MW.) A preliminary estimate of per-kilowatt-hour costs for the new power comes to about 5.5 cents, which is very low.
- Rugged central Washington State has attracted a cluster of data center parks. They’re drawn primarily by cheap hydroelectric energy (and tax incentives). In recent years, the 430,000-square-foot Intergate Columbia campus was constructed; it houses data centers for T-Mobile and VMware. Major IT companies like Yahoo, Ask.com, Intuit, Microsoft, and Dell have also built campuses in the region. Microsoft reportedly pays only 1.9 cents per kilowatt-hour here, compared to eight cents in Silicon Valley, CA, or Seattle, WA; and much more in New York City, NY.
- Central Oregon is also seeing a data center boom, thanks to cheap land, tax breaks, and river hydropower. Facebook built a 334,000-square-foot facility there (and no doubt “likes” it, given that a second equal-sized plant is under construction). And Facebook plans for a third, a 62,000-square-foot site. Apple, Google, and Amazon are all reportedly participating in regional hydroelectric power, and enjoying tax incentives.
- In Santa Clara, CA, a developer called Vantage Data Centers built a 17-acre campus to support three large facilities, reports Greg Ness, Vantage’s Chief Marketing Officer. One is supplied with 6-MW and 12-MW power sources; a second uses 9 MW, with room to double that. A third, a 200,000-square-foot data center, is also underway. “Build it and they will come,” is the tenancy model here: eight occupants have signed on, and space is being leased “faster than we can build it,” says Ness. Vantage is also constructing a 70-acre, 6-MW campus in Quincy, WA, to tap hydropower.
Utilities’ Provisioning Concerns
In these settings, utilities are not “competitors” with self-generators, but collaborative partners, notes Tim Crawford. He’s a board member for an industry group called Data Center Pulse. Utilities, a bit anxious about how to handle the surge from cloud computing and the like, want to ensure they’re delivering consistent power to data centers, yet without wastefulness or risk of overwhelming demand.
Mindful of the challenges, this year Pacific Gas & Electric Company (PG&E) organized a consortium in search of orderly remedies; co-participants include the New York State Energy Research and Development Authority (NYSERDA), TXU Electric Delivery, and Austin Energy. PG&E estimates that data centers are consuming up to 100 times the energy per square foot of more typical offices. PG&E also spends nearly a billion dollars every year to buy or generate power, and would love to economize. So far, no specific remedies have been announced, but the group is expected to incentivize data centers to adopt high-efficiency, power-saving servers and technologies.
More typically sized data facilities have been loading up with higher-than-ever power density as well. According to Wikipedia, data center electricity bills now grab over 10% of the total cost of ownership of a facility. Ongoing expenses now typically exceed original capital investments.
Also, a lot of rarely tapped surplus power is being installed and positioned, “just in case,” as it were. Vantage’s Ness explains that each facility “needs at least 100% backup and, preferably, 150%” or more. For example, Vantage’s Santa Clara campus IT load comes to 37 MW; cumulative diesel gensets stand by with 50 MW or more. This ensures that any power outage at the utility will be bridged instantly and seamlessly.
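Vantage’s redundancy figures can be checked with simple arithmetic. This minimal sketch (the function name is illustrative, not from any real tool) expresses the backup margin:

```python
# Hypothetical backup-capacity check, using the Vantage figures cited above:
# a 37-MW IT load backed by 50 MW of standby diesel gensets.

def backup_ratio(genset_mw: float, it_load_mw: float) -> float:
    """Return installed backup capacity as a percentage of IT load."""
    return genset_mw / it_load_mw * 100

ratio = backup_ratio(genset_mw=50, it_load_mw=37)
print(f"Backup coverage: {ratio:.0f}% of IT load")  # roughly 135%, past the 100% floor
```

By this measure, the Santa Clara campus sits comfortably above Ness’s 100% minimum, though short of his preferred 150%.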
More unusual, though, Vantage has also gone to the expense of buying, owning, and running two dual-fed, 50-MW substations, for further power surety. “This is very rare,” points out Ness, because utilities ordinarily own and run these installations. But Vantage gains for itself “very high 2N reliability . . . fed from two different parts of the grid,” meaning that if half of the utility service area fails, “we still get power from the other half,” via redundant connections from each substation.
Other benefits of substation ownership include having high, “uncut” voltage running straight to the floor, for greater efficiency and voltage reliability, and much lower per-kilowatt-hour rates.
S&C’s Miller echoes Vantage’s preference for high voltage. While designing the Mercantile Exchange’s power distribution, he persuaded managers to abandon the “normal” 480 V and upgrade to 169 kV. He told them, “When you do 480 volts at 100 megawatts, that’s a very, very large amount of copper that ends up getting pushed around data centers.” Gaining just a few percentage points in transmission efficiency by upsizing voltage will eventually total millions of dollars saved.
Also, high-voltage lines “rarely go down,” he notes. Incoming high voltages must still be stepped down, but this is easily done with off-the-shelf distribution equipment in the popular 15-kV and 25-kV classes, he says.
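A back-of-the-envelope sketch shows why the voltage upgrade matters. Assuming a balanced three-phase feed at unity power factor (an illustrative simplification, not the Exchange’s actual design), line current at a fixed power draw falls in proportion to voltage, and resistive conductor losses fall with its square:

```python
import math

def line_current_a(power_w: float, volts: float, pf: float = 1.0) -> float:
    """Three-phase line current for a given power and line-to-line voltage."""
    return power_w / (math.sqrt(3) * volts * pf)

P = 100e6  # 100 MW, the Mercantile Exchange figure cited above
for v in (480, 15_000):
    i = line_current_a(P, v)
    print(f"{v:>6} V -> {i:,.0f} A per phase")

# At fixed conductor resistance, I^2 * R losses fall with the square of the
# voltage ratio: (15_000 / 480) ** 2, roughly 977x lower at the 15-kV class.
```

Hence Miller’s point about copper: at 480 V, 100 MW demands on the order of 120,000 A per phase, versus under 4,000 A at the popular 15-kV distribution class.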
Crawford observes that data centers have a mania for power reliability, wanting “lots of excess capacity,” and splurging on power “that is ultimately not used at all.” In some cases, this might burden grid resources unduly. However, it also spurs innovation for site operators “to reduce demand” and to tweak efficiency, “without sacrificing security.”
Meanwhile, there’s some irony in this over-stocking of resources: ultimately, a single backup component may fail and thus thwart these best-laid plans.
How often do power interruptions occur?
One developer of onsite power reported, anecdotally, that a local utility (Austin Energy, in Texas) encounters outages or serious voltage incidents about once a month, or every few months. This is similar to a nearby Dell data center. “It’s frequent enough to where they don't trust the grid as their primary [power],” says Crawford.
In June 2012, a well-publicized outage at Amazon Web Services in Virginia was determined to have been caused by backup generator failure during a lightning storm. Loads at 10 Amazon data centers failed to transfer to generator backups. Uninterruptible power supplies (UPSes) ran out of battery power, and servers eventually crashed that night.
Another outage, in July, was traced to a cable fault in the utility power distribution system. Backup generators powered off due to a defective cooling fan.
Is DC the Better Alternative?
Given alternating current (AC) power vulnerabilities, what may turn out to be the ultimate power stability solution is represented by a newly arriving example, the Zurich-West Data Center (ZWDC), in Switzerland. It’s being touted as “the world’s most powerful DC [direct current] data center.”
ZWDC runs at 380 volts DC, delivering 1 MW for a 3,300-square-meter expansion. Launched in 2012, the power system was developed collaboratively by two Swiss firms, ABB and Green Datacenter AG Switzerland. ABB, a pioneer of AC and DC conversion electronics, operates in about 100 countries and employs about 145,000 people. ABB was the first enterprise to commercialize long-distance high-voltage DC power transmission; Green is an ICT service provider.
The all-DC site runs “within very tight tolerances” at 400 Hz, notes Sammy Germany, the market development manager at Memphis, TN-based Thomas & Betts (T&B), an electrical supply firm which ABB bought this year. This DC system crafted by T&B’s European sister thus gives users “a very, very clean DC signal, similar in quality to those used on navigational gyroscopes,” says Germany.
In his job with T&B, Germany designs and builds several megawatts’ worth of power generation and substations every month, for hydro, wind, and solar energy. In a recent job, for example, he delivered an all-DC, 48-V system “for a laboratory full of computers powered with solar PV [photovoltaic] charge batteries as primary, and the grid as backup.” Automated switches link the two sources.
DC for mission-critical precision tasks beats AC, Germany believes, for several reasons.
First, DC power distribution comes out 10% more efficient than comparable AC wattage. Second, DC is less complicated and requires fewer power conversions. Third, equipment costs 15% less (at ZWDC) and takes up about 25% less floor space. DC also requires less maintenance. Above all, he says, “DC is cleaner and more stable” by a factor of about 10%. DC output yields “super clean power,” which is especially valued in technology, science, and research fields.
One might ask then: If DC is so apt for data centers, why wasn’t it fast-tracked long ago?
A lack of cheap, mass-produced DC hardware has been one obstacle; a second is the fact that a building’s HVAC and lighting still need AC, so adding DC amounts to unnecessary duplication. Nevertheless, Germany sees “a major push on DC right now” for data centers and other sites, driven by the fact that the two renewable darlings, solar and wind, yield DC. Another driver will be electric vehicles. As one measure of the recent shift to DC, the central United States has already sprouted thirteen major north-south DC transmission lines to deliver it, he notes. Eventually, economies of scale will make widespread implementation likely.
Greening IT, With Onsite Renewables
Speaking now of energy production at data centers, there’s of course tremendous social and economic incentive in play for whatever is “clean and green.” The US Department of Energy (DOE) has pushed for this. So have aggressive environmentalists. Greenpeace, for one, has vocally chastised the data center industry for its reliance on antiquated coal power—not to mention gobbling 2% or 3% of the nation’s entire electricity supply (a figure doubling every three to five years).
For their part, data center operators are reportedly eager to institute some renewables, at least on a token basis. Renewables, though, suffer four well-known drawbacks: low power density for the mission; high capital cost; grid interconnection barriers; and intermittent output.
T&B’s Germany affirms that there is indeed a powerful desire for renewables—at least for one particular function, which is to recharge batteries in the UPS backup. Batteries for servers are of course ubiquitous; they’re mandatory. In this role, notes Germany, solar PV in hybrid arrays is “huge.” It supplements the grid, and solar energy tops up batteries as a virtually perfect power source. Solar PV feeds a charge controller, which feeds battery banks. Germany notes: “A lot of times during the day, they have the [switchgear] set up to where it will roll to primary power from the solar cells during the day—and go to grid at night for secondary.” For example, a Google data center does this; a Dell data center in Texas uses wind and solar this way; and Apple’s 5-MW solar plant in North Carolina has similar flexibility.
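The day/night rollover Germany describes can be sketched as a toy source-selection rule. The threshold, names, and behavior here are illustrative assumptions for explanation only, not a real switchgear API:

```python
# Toy sketch of automated switchgear logic: roll to solar as the primary
# feed while PV output is available, and fall back to grid otherwise.
# The 5-kW threshold is a hypothetical cutoff, not a real-world figure.

def select_source(solar_output_kw: float, min_solar_kw: float = 5.0) -> str:
    """Pick the primary feed based on available PV output."""
    return "solar" if solar_output_kw >= min_solar_kw else "grid"

assert select_source(solar_output_kw=40.0) == "solar"  # midday: PV is primary
assert select_source(solar_output_kw=0.0) == "grid"    # night: grid is secondary
```

In a real installation this decision is made by automatic transfer switches rather than software, but the logic is the same: the preferred source serves whenever it can, and the grid quietly backstops it.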
Not surprisingly, data center operators are typically not eager to take servers wholly off grid, but they’re cautiously willing to install solar panels or wind turbines to supplement power for building systems like HVAC and lighting.
Several more examples of renewable power convey the flavor of what’s happening:
Emerson Network Power generates 100 kW with solar panels on its St. Louis, MO, facility; AISO.net, a virtual hosting provider in Romoland, CA, generates all energy from an acre of solar panels, plus propane-powered backup generators. Other World Computing, in Illinois, powers its data center with an onsite wind turbine when possible, and switches to the grid when the wind dies. (Of course, many data centers buy utility-generated wind and solar energy too, using renewable energy credits.)
A final example is surely the most daring off-grid excursion of all. It is both unprecedented and highly experimental. In this collaboration between NYSERDA, Clarkson University, AMD, and Hewlett-Packard, the goal is to build two connected data centers. Both will generate all power from either solar PV or wind turbines. One will be built in windy upstate New York; the other in sunny Texas (specific sites not yet known).
Now comes the riskiest part: In order to overcome the notorious intermittency of solar and wind, the two data centers intend to shift shared computing activity back and forth routinely, depending, literally, on “which way the wind blows.” At night, say, when solar is obviously offline, the wind-powered plant will take charge. Data will be shared across long-distance fiber-optic networks.
A variation of this concept proposes that huge, data-intensive global firms like eBay and Google could move their enormous computational loads from site to site continuously, chasing cheap after-hours utility rates. The strategy is being dubbed “following the moon.” Cumulatively, this has been calculated to potentially save millions of dollars in billings.
A Solid Oxide “Energy Server”
Fuel cell power technologies, though not renewable, may be the best energy fit of all, say proponents.
With a fuel-conversion efficiency reportedly of 61.5% (better than the utility or any other competing generation), and running on cheap natural gas, Bloom Energy’s solid-oxide power plant yields virtually no emissions. It is flexible enough to operate off-grid, islanded, grid-parallel, or grid-interconnected. In the two years since its introduction in 2010, the device has found working adoptions numbering in the hundreds.
Good old-fashioned fuel cells have proven themselves for many years. They are powering any number of remote, off-grid sites, like microwave towers and broadcast stations. On the negative side, though, fuel cells carry a big price tag. They often need a subsidy to be cost-justifiable. And life-cycle performance can sometimes be disappointing.
Bloom Energy, of Sunnyvale, CA, now claims to have advanced its solid-oxide fuel cell technology dramatically, says Peter Gross, who is Bloom’s vice president of mission critical systems. This was accomplished chiefly by incorporating “new materials that withstand the stress of high temperature over long periods of time,” he says.
In strategically targeting data centers in particular, Bloom has even thought to rechristen its 200-kW electrochemical box an “energy server.” This evokes the concept of modularity, akin to that of servers chained together in data centers. Thus, rather like data servers, the Bloom boxes can be scaled and linked in power output “to enable right provisioning, because the size of the supply matches the load very well” this way, Gross says.
As for early adopters of the Bloom Energy Server: eBay’s data center in South Jordan, UT, appears to be the first major Internet business to go off grid with it for primary power, relegating the grid to a backup role. This particular application requires 6 MW of power (about 15% of the site’s total load). Each Bloom box yields only 200 kW, and hence thirty of them are needed. Commissioning is anticipated in mid-2013.
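The unit count cited above is straightforward division, rounded up to whole boxes. A quick sketch (the function name is illustrative):

```python
import math

# Sanity check on the sizing cited above: a 6-MW load served by
# modular 200-kW Bloom "energy servers".

def units_needed(load_kw: float, unit_kw: float) -> int:
    """Whole units required to cover a load, rounding up."""
    return math.ceil(load_kw / unit_kw)

print(units_needed(load_kw=6_000, unit_kw=200))  # -> 30
```

The rounding-up matters for Gross’s “right provisioning” argument: modular 200-kW increments let the supply track the load far more closely than a single multi-megawatt plant could.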
Other diverse loads are also being powered with modest Bloom Energy Server installations at Google, FedEx, Wal-Mart, Coca-Cola, AT&T, Kaiser Permanente, Fujitsu, Verizon, First Bank of Omaha, and others, according to Gross and company announcements.
Reportedly priced in the high six figures, the 200-kW box (“server”) may seem steep. However, Gross is quick to point out that this must be measured against the cumulative price of the equipment it replaces: the backup generators, expensive switchgear, UPS paralleling, and so on. On a per-kilowatt-hour output basis, he sums up, “It displaces higher-cost grid power with lower-cost fuel cell power. The financials are compelling enough to drive many installations.”
Onsite Energy Efficiency
Lastly, by far the most cost-effective onsite solution is one that doesn’t seek to replace the grid with self-generation; rather—as is so often true—the best choice turns out to be onsite conservation.
Responding to criticism and political pressure, the data industry has made tremendous efficiency strides, for instance, by developing its widely adopted PUE (Power Usage Effectiveness) index. However, there’s quite some distance to go yet. As Crawford observes, “Much ICT equipment still today runs at only 20% utilization. That means it’s powered, but is serving no purpose.” It’s been calculated that ICT could quite easily improve “by a factor as high as 50% to 80%,” he says, “through something as relatively simple as virtualization.”
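For reference, the PUE index is simply total facility power divided by the power actually delivered to IT equipment; a score of 1.0 would mean every watt reaches the servers. The sample numbers below are illustrative assumptions, not figures from any facility named here:

```python
# Minimal sketch of the PUE metric: total facility power over IT power.
# An ideal facility scores 1.0; real-world overhead (cooling, lighting,
# conversion losses) pushes the ratio higher.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness ratio."""
    return total_facility_kw / it_load_kw

# Illustrative example: 1,500 kW drawn at the meter, 1,000 kW reaching IT.
print(f"PUE = {pue(total_facility_kw=1500, it_load_kw=1000):.1f}")  # -> PUE = 1.5
```

A PUE of 1.5 in this hypothetical means half a watt of overhead is spent for every watt of useful computing, which is exactly the kind of waste the index was designed to expose.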
For a recent example, PG&E reported that one of its customers made use of virtualization technology to consolidate 230 servers onto just 11 new machines.
“Even getting to 40% is not unrealistic,” continues Crawford. “Conservatively, you would double the amount of capacity or reduce the amount [of energy usage], by maximizing utilization.” Huge benefits would accrue for site owners and for strapped power grids.
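The PG&E consolidation example works out to roughly a 21-to-1 ratio. A quick sketch of the arithmetic:

```python
# The PG&E customer example cited above: 230 physical servers
# virtualized onto 11 new host machines.

before, after = 230, 11
ratio = before / after                    # consolidation ratio
reduction_pct = (1 - after / before) * 100  # machines eliminated, as a percent

print(f"Consolidation ratio ~{ratio:.0f}:1, "
      f"a {reduction_pct:.0f}% reduction in machines")
```

Even allowing for the bigger, hotter hosts that replace the old boxes, eliminating roughly 95% of the physical machines is the kind of gain Crawford’s utilization argument points toward.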
A perhaps surprising obstacle to efficiency is the simple fact that data center “turf” divisions are such that electric bills are not paid by the server-floor managers whose departments burn the most watts; rather, bosses upstairs are in control. And typically, they’re not sufficiently involved in the nuances of server power usage to make optimal decisions. The line-level managers “literally do not see and don’t know what power is costing,” notes Crawford. He is citing a recent survey from the Uptime Institute, which discovered that only 20% of data center server managers have a clue as to what electricity costs, and how utility rate structures work. Simply changing this, and incentivizing staff to be frugal, “would be revolutionary,” says Crawford.
Sentilla’s Appelbaum likewise touts the value of using software-based energy management tools to gain granular insight into data center capacity, consumption and utilization, across the board. Sentilla’s software suite is one of a number being sold to help analyze energy flows and tweak efficiency.
T&B’s Germany sums up with an observation echoed by several others:
“The data center world is slow to adapt to changes,” he says. “You’re dealing with sensitive data. If a problem arises, data may be dropped or lost. This can be a career-ending decision, if something goes wrong. So, they tend to err on the side of caution.”
Author’s Bio: Writer David Engle specializes in construction-related topics.
† This, and other figures cited, are drawn from a mélange of sources, including the “DCD Global Census Growth Figures,” World Internet Usage, Greenpeace, the International Telecommunications Union, and Internet World Stats.