Air Flow Management for Data Centers
Data Center Cooling Airflow Strategies: Advanced vs. Conventional
A collection of techniques and strategies that direct supply air to the servers at a rate needed for server performance while minimizing airflow energy costs.
Item ID: 376
Sector:
Commercial, Industrial
Energy System:
HVAC--Other HVAC Systems
Technical Advisory Group: 2011 Energy Management TAG (#4)
Average TAG Rating: 3.1 out of 5
TAG Ranking Date: 09/29/2011
TAG Rating Commentary: - Does not meet criteria for an emerging technology; already available.
- Re-educate IT folks as to the true temperature and humidity needs of their equipment.
Technical Advisory Group: 2009 HVAC TAG (#2)
Synopsis:
The Data Center Energy Efficiency Emerging Technology Demonstration Project, performed at the Sun Microsystems, Inc., Enterprise Technology Center Laboratory in Menlo Park, California, focused on the confinement of chilled air distribution in data center cold aisles. This methodology is based on the observation that current industry practices allow the chilled supply air to mix with surrounding warmer air before it reaches the servers, wasting energy because it supplies more airflow at a lower temperature than is required.
The first objective of the experiment was to show that, by implementing appropriate air distribution controls, the mixing and recirculation of data center air can be prevented. The second objective was to show that this technique reduces the airflow and cooling load at the Computer Room Air Handler (CRAH) units. The test results confirmed that isolating the cold aisle from the rest of the data center is a practical and efficient method to achieve air distribution control. Lower server inlet temperatures were measured, demonstrating that mixing had been substantially reduced. Potential fan and cooling energy savings of about 20% have been identified.
While cold aisle separation is becoming a common practice for new data centers, there are new developments in how the separation is achieved and new opportunities for retrofitting existing data centers that were not designed with this in mind.
There are companies that offer a set of services and tools to optimize the airflow within a data center. These tools include sensors that provide continuous temperature feedback.
This technology assesses current conditions and uses computational fluid dynamics (CFD) modeling results to analyze solutions for controlling airflow. Benefits include balanced static pressure and less equipment required to deliver cool air as needed to specific loads in the data center.
This solution is designed with CFD modeling software to balance the airflow for optimal results. The results are then implemented and compared against the design. The solution includes monitoring and updates as needed to keep the solution optimized with ongoing maintenance. Payback from implementing this solution is generally within 12 to 18 months. Google and Lawrence Berkeley National Laboratory have used this technology in their data centers.
Baseline Example:
Baseline Description: Underfloor supply, overhead return
Baseline Energy Use: 810 kWh per year per square foot
Comments:
Convection cooling with air is currently the predominant method of heat removal in most data centers. Air handlers force large volumes of cooled air under a raised floor (the deeper the floor, the lower the friction resistance) and up through perforated tiles in front of (or under) computer racks. Fans within the server racks or “blade cages” distribute the cool air across the electronics that radiate heat, perhaps with the help of heat sinks or heat pipes. The warmed air rises to the ceiling where it is returned to the computer room air handlers to be re-cooled.
Baseline and energy savings are based on the energy use of a "typical" data center, as defined as the standard by the E3T IT TAG team. The energy use of a full data center is 1,500 kWh/sf/yr. The baseline for this technology is the HVAC portion of that, which is 54%, or 810 kWh/sf/yr (WSU EEP, 2013).
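The baseline figure follows from the two assumptions above. A minimal sketch of the arithmetic in Python, using only the values cited in this entry:

    # Baseline HVAC energy intensity for a "typical" data center
    total_eui_kwh_per_sf_yr = 1500   # whole-facility energy use intensity (E3T IT TAG standard)
    hvac_share = 0.54                # HVAC portion of total data center energy use
    baseline_eui = total_eui_kwh_per_sf_yr * hvac_share
    print(baseline_eui)              # 810.0 kWh/sf/yr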
Manufacturer's Energy Savings Claims:
"Typical" Savings: 20%
Savings Range: From 15% to 25%
Comments:
The baseline technology uses large central equipment with single or staged capacity. This ET uses walls/curtains, multiple fans, blank-off panels, grommets, and other readily available products to better control air distribution, saving fan energy. Thermal sensors provide graphical feedback on where the cold air needs to be directed to reduce fan energy. This ET is a design strategy, utilizing products that force the cool air only where it is needed.
Best Estimate of Energy Savings:
"Typical" Savings: 30%
Low and High Energy Savings: 10% to 50%
Energy Savings Reliability: 2 - Concept validated
Comments:
There are so many variables that the energy savings can only be estimated on a site-by-site basis. Items affecting the potential savings include current maintenance and operation practices, size, loading, configuration, etc. However, we suggest using the typically reported savings of about 30% of the cooling energy.
Energy Use of Emerging Technology:
567 kWh per square foot per year
Energy Use of an Emerging Technology is based upon the following algorithm.
Baseline Energy Use - (Baseline Energy Use * Best Estimate of Energy Savings (either Typical savings OR the high range of savings))
Comments:
Typical reports of energy savings average about 30%. Taking 70% of the baseline EUI of 810 kWh/sf/yr results in a new EUI of about 567 kWh/sf/yr.
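A minimal sketch of the algorithm above in Python, using the baseline and typical-savings figures from this entry:

    # Emerging technology energy use = baseline - (baseline * savings fraction)
    baseline_eui = 810        # kWh/sf/yr, HVAC portion from the baseline section
    typical_savings = 0.30    # best estimate of typical savings
    et_eui = baseline_eui - (baseline_eui * typical_savings)
    print(round(et_eui))      # 567 kWh/sf/yr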
Technical Potential:
Units: square foot
Potential number of units replaced by this technology: 4,362,704
Comments:
We have not been able to find accurate data for square footage of data centers in the Northwest. The best, most up-to-date estimate of space in the US we could find is from DataCenterDynamics (DCD, 2014, Pg. 4). According to this report, the total "white space" in the US is 109,067,617 sf. To convert to the Northwest, we use a standard of 4% of national data, based on relative population. In this case, the Northwest probably has more than its share of data centers, so we could probably justify a higher number. However, we are not likely to be serving the mega data centers over 100,000 sf., so we should reduce the number. As a close approximation, we will stick with 4%, which gives a total floor space of non-mega data centers in the Northwest of 4,362,704 sf.
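The floor-space estimate follows directly from the national figure and the 4% regional factor. A minimal sketch in Python:

    # Northwest non-mega data center floor space, scaled from the national estimate
    us_white_space_sf = 109_067_617            # total US "white space" (DCD, 2014)
    nw_share = 0.04                            # Northwest share, based on relative population
    nw_floor_space_sf = int(us_white_space_sf * nw_share)
    print(nw_floor_space_sf)                   # 4,362,704 sf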
Regional Technical Potential:
1.06 TWh per year
121 aMW
Regional Technical Potential of an Emerging Technology is calculated as follows:
Baseline Energy Use * Estimate of Energy Savings (either Typical savings OR the high range of savings) * Technical Potential (potential number of units replaced by the Emerging Technology)
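A minimal sketch of this calculation in Python, using the figures from this entry; the aMW conversion assumes 8,760 hours per year:

    # Regional technical potential = baseline energy use * savings fraction * floor space
    baseline_eui = 810                        # kWh/sf/yr
    typical_savings = 0.30                    # best estimate of typical savings
    floor_space_sf = 4_362_704                # Northwest non-mega data center floor space
    annual_savings_kwh = baseline_eui * typical_savings * floor_space_sf
    print(annual_savings_kwh / 1e9)           # ~1.06 TWh per year
    print(annual_savings_kwh / 8760 / 1000)   # ~121 aMW (average megawatts)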
First Cost:
Installed first cost per: square foot
Emerging Technology Unit Cost (Equipment Only): $25.00
Emerging Technology Installation Cost (Labor, Disposal, Etc.): $0.00
Baseline Technology Unit Cost (Equipment Only): $0.00
Comments:
A cost of $2,500 for a 100 sf area was provided by the supplier 4U2.
Cost Effectiveness:
Simple payback, new construction (years): 1.1
Simple payback, retrofit (years): 1.1
Cost Effectiveness is calculated using baseline energy use, best estimate of typical energy savings, and first cost. It does not account for factors such as impacts on O&M costs (which could be significant if product life is greatly extended) or savings of non-electric fuels such as natural gas. Actual overall cost effectiveness could be significantly different based on these other factors.
Comments:
This technology has a typical return on investment of less than 18 months.
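The simple payback above can be reproduced as first cost divided by annual energy cost savings. A minimal sketch in Python; the electricity rate used here (about $0.09/kWh) is an assumption chosen only to make the arithmetic concrete, not a figure from this entry:

    # Simple payback = installed first cost / annual energy cost savings
    first_cost_per_sf = 25.00                 # $/sf installed (equipment cost from the supplier quote)
    annual_savings_kwh_per_sf = 810 * 0.30    # 243 kWh/sf/yr at the typical 30% savings
    electricity_rate = 0.094                  # $/kWh -- assumed for illustration only
    payback_years = first_cost_per_sf / (annual_savings_kwh_per_sf * electricity_rate)
    print(round(payback_years, 1))            # ~1.1 years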
Detailed Description:
LBNL strategies include:
- Seal air leaks
- Eliminate obstructions to air flow
- Optimize perforated tile locations
- Seal cable penetrations
- Install missing blanking plates
- Remove or relocate as many obstructions to airflow under floor as practical.
- Fix under-floor air leaks primarily at cable and pipe penetrations and under racks.
- Install missing blanking plates and side panels in server racks.
- Re-locate tiles to balance under floor flow.
- Fix or replace leaking or broken floor tiles.
- Determine chiller and cooling tower system capacity.
- Study airflow patterns with a visualization tool/method.
- Develop designs to optimize airflow circulation.
- Consider establishing a return air path with an overhead plenum.
- Add/connect ductwork from computer room air conditioner (CRAC) unit air intake to overhead plenum.
- Include isolation dampers in air intake ductwork.
- Confirm point-to-point connections of temperature sensors.
- Ensure airflow leakage around floor tiles is minimized.
- Verify server airflow temperature at IT equipment inlets.
- Check air temperature at server inlets and outlets for differential temperature (ΔT).
- Confirm isolation damper operation at CRAC air inlets (returns).
- Check for leaks at cable penetrations in floor.
- Review and test new control sequences for CRAC units.
- Increase data center setpoint temperature
- Optimize control coordination by installing an energy monitoring and controls system (EMCS)
- Disable or broaden the control range of humidification systems, which can otherwise have unintended, simultaneous operations
- Install curtains
OnRak is an example of a component used in this strategy and is used here to illustrate the concept of this approach to data center cooling; other manufacturers offer this technology as well. The OnRak is a compact rear-door heat exchanger designed to manage the heat generated by the servers by discharging it into the aisle space. By dealing with the heat load close to the source, the OnRak is highly efficient in its use of energy and floor space.
The OnRak also:
- Drops the cold aisle temperature by 4 to 13 degrees F. The cold aisle is the space between the racks and is normally where the cold air is directed so it can be drawn through the racks, which is an inefficient cooling strategy.
- Eliminates hot spots.
- Moves about 95% of available conditioned air directly to the heat load.
The services include items covered in ETs 62, 68 and 158.
Standard Practice:
The standard practice is to cool data servers using constant-volume, stand-alone air conditioners with the cooling air supplied from the ceiling. This cools the whole room instead of only removing the heat from the server racks.
Development Status:
This technology has been available for a few years (as of 2011), primarily on the East Coast, and has been used in several data centers with great success.
Non-Energy Benefits:
Potential CRAC unit downsizing for less maintenance.
End User Drawbacks:
None.
Operations and Maintenance Costs:
Baseline Cost: $0.50
per: square foot per year
Emerging Technology Cost: $0.50
per: square foot per year
Comments:
Per ASHRAE 2007 Applications Chapter 36: The maintenance costs of the baseline equipment are about the same as for this technology, both at about $0.50/sf/yr. Both technologies have coils to be cleaned, refrigerant filters to inspect and replace if showing signs of debris, etc. The equipment used in this technology requires the same amount of an operator's time to operate.
Effective Life:
Anticipated Lifespan of Emerging Technology: 20 years
Comments:
The life of equipment used with this technology is longer. It uses equipment that has variable speed on up to four components, significantly reducing wear, thus extending the life. Per ASHRAE, the baseline technology life expectancy is 15 years, whereas the equipment in this technology (with soft-starts) is closer to 20 years.
Competing Technologies:
There are several in-rack solutions, but they all work in approximately the same way by removing the heat at the source.