Data Center Demand Response


Data centers represent roughly 2% of US electricity usage. As the economy digitizes, the thirst for computing power, ultimately served by data centers, continues to grow. At the same time, electricity usage by other end uses has remained relatively flat, so data centers are projected to make up a larger share of overall electricity usage in the future. From the perspective of the utilities that serve them, data centers were historically uninterested in participating in demand response opportunities. The industry is changing, however, and the recent maturation of data center design approaches may allow these large loads to function more dynamically. In this article, we examine the potential of data center energy usage as a dynamic resource for grid operation, whether as a demand response asset, an asset capable of shifting usage in time or geography, or simply as a load whose variability grid operators must recognize.

To fully understand the potential of demand response within data centers, a quick historical context is helpful.

When data centers were first being built, the facilities were designed with reliability top of mind; keeping the servers on was the utmost priority. That might mean being served by multiple substations, placing loads behind uninterruptible power supplies, and operating redundant machines with backups ready to take over if anything failed. Many observed that data centers had relatively flat load profiles: they used essentially the same amount of energy all day, every day. This was partly a consequence of the redundant power architecture, whose power-conversion losses masked fluctuations in server power. That was acceptable because data center operators were primarily concerned with ensuring the reliable operation of their critical load.

Those same basic tenets still hold true today, but demand response opportunities are now more likely because both the data center industry and the utility demand response industry have matured. The purview of "demand response" has shifted from emergency load shedding to a broader range of services, including load shifting and ancillary services. What was once a contingency may soon be an important component of how a grid is operated. With that in mind, might the data center industry be ripe with opportunity?

Let’s look at the three primary components of a data center: the IT loads themselves, the building loads associated with the facility, and the power components charged with maintaining reliable operation of the critical loads (the uninterruptible power supply and backup power generation resources).


IT Loads Are More Dynamic and Can Be Responsive
IT loads do not operate at maximum capacity all the time, whether measured in power draw or computation. Think about your personal computer or phone: each is a capable computational device that often runs at a small fraction of its potential. Figures on the utilization of IT computing power vary; LBNL often cites 40% (https://datacenters.lbl.gov/resources), but the broad understanding is that servers are oversized, both to guard against rapid obsolescence and to alleviate stress on the equipment. The net result is that many servers sit idle or near idle for long periods. This presents opportunities, since loads are not as flat as the prevailing understanding suggests.

The power draw of idle servers is also changing. Whereas a decade ago typical IT servers drew in excess of 50% of their maximum power while idle, today's servers with more energy-efficient processors draw closer to 10% of maximum power at idle. For power system operation, this means IT loads swing more dramatically between idle and maximum, which will affect data center load profiles. This is particularly true for colocation data centers, which host other companies' IT, network, and storage infrastructure; those operators have little influence over each customer's actual computing load.
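The widening gap between idle and maximum draw can be illustrated with a simple linear power model. This is a sketch for intuition only; the 50% and 10% idle fractions come from the article's illustrative figures, and the 500 W maximum is an assumed value, not a measurement of any specific server.

```python
# Simple linear server power model: P = P_idle + (P_max - P_idle) * utilization.
# Idle fractions (50% for an older server, 10% for a newer one) follow the
# article's illustrative figures; the 500 W maximum is an assumption.

def server_power(p_max_w, idle_fraction, utilization):
    """Estimate power draw (W) at a given utilization (0.0 to 1.0)."""
    p_idle = p_max_w * idle_fraction
    return p_idle + (p_max_w - p_idle) * utilization

# Older server: idles at ~50% of a 500 W maximum -> 250 W at idle.
old_idle = server_power(500, 0.50, 0.0)
# Newer server: idles at ~10% of the same maximum -> 50 W at idle.
new_idle = server_power(500, 0.10, 0.0)

# The newer server swings across a much wider power range between idle and
# full load, so aggregate data center load profiles become less flat.
print(old_idle, new_idle)
```

The same model makes the load-profile point concrete: the older server varies over a 250 W range between idle and maximum, the newer one over 450 W.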

The idle status of IT loads represents operational opportunities as well:

  • Consolidation—Can servers be virtualized onto fewer physical machines and still achieve the desired results? Can that idle server, which is really only for contingency and high-intensity computations, be turned off completely when not needed?
  • Scheduling—Can high-intensity computations be scheduled for certain times of day, perhaps when power is cheaper?
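The scheduling opportunity above can be sketched as a small optimization: given a day-ahead price forecast, run deferrable batch work in the cheapest hours. The prices and the four-hour job are made-up illustrations, not data from any real tariff.

```python
# Hypothetical sketch: pick the cheapest hours of a day-ahead price forecast
# for deferrable, high-intensity batch computations.

def schedule_jobs(hourly_prices, n_job_hours):
    """Return the indices of the cheapest hours to run n_job_hours of work."""
    ranked = sorted(range(len(hourly_prices)), key=lambda h: hourly_prices[h])
    return sorted(ranked[:n_job_hours])

# 24 illustrative hourly prices in $/MWh; overnight hours are cheapest.
prices = [30, 28, 27, 26, 27, 29, 35, 45, 55, 60, 62, 64,
          65, 66, 64, 60, 58, 70, 75, 65, 50, 42, 36, 32]

print(schedule_jobs(prices, 4))  # [1, 2, 3, 4]: the early-morning hours
```

A real scheduler would also respect job deadlines and site capacity, but even this greedy version captures the core idea of shifting computation toward cheap power.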

IT and Geographic Redundancy
The reality of the modern data center industry is that consolidation is occurring. Companies are embracing the transition to the cloud and to colocation facilities. Each can offer a "redundancy in depth" that alleviates the uptime concerns of any single data center.

For some large companies, end users may not notice a single data center being shut down for a period of time. Do you really care whether you get about 25,000,000 search results in 0.35 seconds or 15,000,000 in 0.40 seconds? That's IT redundancy at work, and that redundant architecture is what allows IT equipment to be ramped.

Taking this one step further, if IT loads can be virtualized and placed on different machines, then IT loads can be moved around geographically to lower power cost or to participate in demand response events. Geographic versatility could be useful for grid operators looking to manage transmission constraints and local energy prices.
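Moving virtualized load between regions can be sketched as a simple placement problem: assign movable megawatts to the cheapest sites first, up to each site's spare capacity. The region names, prices, and capacities below are illustrative assumptions.

```python
# Hypothetical sketch: route movable (virtualized) IT load to the regions with
# the lowest current energy price, respecting each site's spare capacity.

def place_load(load_mw, regions):
    """Greedily assign load (MW) to regions in ascending order of price.

    regions: dict of name -> (price_per_mwh, spare_capacity_mw)
    Returns a dict of name -> assigned MW.
    """
    assignment = {}
    remaining = load_mw
    for name, (price, spare) in sorted(regions.items(), key=lambda kv: kv[1][0]):
        take = min(remaining, spare)
        if take > 0:
            assignment[name] = take
            remaining -= take
        if remaining <= 0:
            break
    return assignment

# Illustrative sites: (price $/MWh, spare capacity MW).
sites = {"east": (55.0, 10.0), "central": (32.0, 6.0), "west": (40.0, 8.0)}
print(place_load(12.0, sites))  # {'central': 6.0, 'west': 6.0}
```

From a grid operator's view, the same logic run in reverse, steering load away from a constrained region, is what makes geographic versatility useful for managing transmission constraints.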

[Graphic (Schneider Electric): Data centers' physical infrastructure will become more interactive with the grid and IT]

Building Loads at a Data Center
Cooling the air within a data center is a significant energy expenditure. The use of cold/hot aisles has improved efficiency, but many centers still operate at a temperature most comfortable for humans. ASHRAE's Thermal Guidelines for Data Processing Environments recommends that the air temperature at the server intake be between 64.4 and 80.6°F, with "allowable" temperatures extending roughly an additional 5° lower and 9° higher. Depending on the manufacturer, there may be more specific temperature guidance.

With respect to designing a demand response program around data center thermostats, concerns about human comfort are mostly gone, replaced by a more temperature-agnostic occupant: servers. ASHRAE's recommended temperature band is wider than a typical residential customer would tolerate. From a demand response perspective, that broad leeway is enticing, but the excitement should be tempered: studies show that the temperature inside a data center can change quickly, and rapid temperature change can compromise the integrity of the materials that make up IT equipment. Still, data centers, particularly those in moderate climates where outside air may not need conditioning, should explore their potential for HVAC demand response participation.
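One way to reconcile the wide ASHRAE band with the risk of rapid temperature change is to raise setpoints during a demand response event only in small, rate-limited steps. The sketch below uses the recommended 64.4 to 80.6°F band from the guidelines; the 2°F per-event step limit is an illustrative assumption, not an ASHRAE figure.

```python
# Hypothetical setpoint logic for an HVAC demand response event:
# clamp requests to ASHRAE's recommended inlet band, and limit how far the
# setpoint can move in one step (the 2 F limit is an assumed value).

RECOMMENDED_LOW_F = 64.4
RECOMMENDED_HIGH_F = 80.6
MAX_STEP_F = 2.0  # assumed ramp limit to avoid rapid temperature swings

def dr_setpoint(current_f, requested_f):
    """Return the setpoint actually applied for a DR event."""
    # Keep the request inside the recommended band.
    target = max(RECOMMENDED_LOW_F, min(RECOMMENDED_HIGH_F, requested_f))
    # Move toward the target no more than MAX_STEP_F at a time.
    step = max(-MAX_STEP_F, min(MAX_STEP_F, target - current_f))
    return current_f + step

print(dr_setpoint(72.0, 80.0))  # 74.0: steps toward the target 2 F at a time
print(dr_setpoint(80.0, 90.0))  # clamped near the top of the recommended band
```

Repeated calls walk the setpoint gradually toward the DR target, trading a slower demand reduction for protection of the IT hardware.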


The ASHRAE guidelines referenced above also include humidity considerations. The Lawrence Berkeley National Laboratory published a discussion of the practical considerations associated with the relatively wide range of allowable humidity. The upshot is that practitioners must balance cooling data centers with potentially humid outside air against re-cooling inside air that is within the allowable humidity range. These two approaches likely have different energy (and demand) ramifications depending on the data center. Scheduling the more energy-intensive activity for certain portions of the day could function as a demand response activity while maintaining appropriate humidity.

Uninterruptible Power Supply
The UPS has long been seen as a key component of data center architecture. A UPS, typically a battery system, protects against outages and other power deviations coming from the electric grid. UPS design specifications usually include a window in which the system is expected to operate: most are sized for less than 15 minutes of runtime, enough for IT equipment to shut down safely or for backup generators to pick up the load when a power quality event makes that necessary to continue operations.

As we discussed earlier with respect to IT utilization, these UPS systems are likely oversized relative to the actual load that is under their purview. A portion of that excess UPS capacity can effectively be leveraged on command to perform demand response. Just as with vehicle-to-grid concepts, the UPS demand response activity leverages batteries that are already installed, but not performing their dedicated function all of the time.
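The headroom argument above can be made concrete with a back-of-the-envelope calculation. All figures below (the UPS rating, the actual IT draw, and the 20% reserve held back for the UPS's primary ride-through role) are illustrative assumptions; a real assessment depends on the specific facility and its reliability requirements.

```python
# Hypothetical sketch: estimate UPS capacity that could be offered for demand
# response after covering the actual IT load and a safety reserve.
# All numbers are illustrative assumptions.

def ups_dr_headroom_kw(ups_rating_kw, it_load_kw, reserve_fraction=0.2):
    """Power (kW) the UPS could discharge for DR, after serving the IT load
    and holding back reserve_fraction of its rating."""
    reserve_kw = ups_rating_kw * reserve_fraction
    return max(0.0, ups_rating_kw - it_load_kw - reserve_kw)

# A 1,000 kW UPS backing an IT load that actually draws only 400 kW,
# with 20% of the rating reserved for its primary backup role.
print(ups_dr_headroom_kw(1000.0, 400.0))  # 400.0 kW available for DR
```

As with vehicle-to-grid, the appeal is that the battery is already bought and installed; demand response monetizes capacity that would otherwise sit unused most of the time.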

Data centers can be ideal candidates for load management programs. Innovation in flexible IT load management to shift loads to other regions, along with the dispatch of existing on-site resources, including power stored in UPS battery systems, can deliver a mutually beneficial business outcome for both utilities and data center operators.
