
Summary

Adaptive Power Management for IT Equipment

Data Center Power Management: Adaptive vs. None

Taking advantage of power management capabilities in servers and other IT equipment to shut them down or lower their power consumption when not in use.

Synopsis:

Server power management is a huge opportunity for energy savings because most data centers operate at about 6-12% of full capacity, yet idle servers use over half of their maximum power while performing no useful work (Glanz, 2012).

Data center power management systems collect and consolidate real-time operating data from power distribution systems to identify “stranded capacity” and minimize the power used by servers that are mostly or completely idle.  This can reduce server energy use by almost a third.  Since reductions in IT equipment energy use decrease the need for power and cooling, power management also helps free up the power and cooling equipment capacity needed to handle the explosive growth in data center power density.  Energy savings are greater when workloads can be consolidated onto more active servers, allowing other servers, if not entire racks, to be powered down with little or no impact on performance.

However, power management is an underutilized opportunity.  Data center managers have concerns about impacts on server response time, often do not understand how to select and implement power management or how to quantify its benefits, and lack incentives to reduce energy use.  Only a third of respondents in a recent survey had implemented any form of power management, and all but one of them felt they lacked the tools to quantify the energy savings (Pflueger, 2010).

Energy Savings: 20%
Energy Savings Rating: Limited Assessment

Level 1 - Concept not validated: Claims of energy savings may not be credible due to lack of documentation or validation by unbiased experts.
Level 2 - Concept validated: An unbiased expert has validated efficiency concepts through technical review and calculations based on engineering principles.
Level 3 - Limited assessment: An unbiased expert has measured technology characteristics and factors of energy use through one or more tests in typical applications with a clear baseline.
Level 4 - Extensive assessment: Additional testing in relevant applications and environments has increased knowledge of performance across a broad range of products, applications, and system conditions.
Level 5 - Comprehensive analysis: Results of lab and field tests have been used to develop methods for reliable prediction of performance across the range of intended applications.
Level 6 - Approved measure: Protocols for technology application are established and approved.
TAG Technical Score:  2.92

Status:

Details

Adaptive Power Management for IT Equipment

Item ID: 508
Sector: Commercial, Industrial
Energy System: Electronics--Information Technology
Technical Advisory Group: 2013 Information Technology TAG (#8)
Average TAG Rating: 3.33 out of 5
TAG Ranking Date: 10/25/2013
TAG Rating Commentary:
  1. It works, it is common, it is free!  Not emerging, should NOT be incentivized - need training (will never happen if IT does not pay power bill - if they pay the power bill it will happen next day!).
  2. This is not an ET as this software has been around for over 8 years (VMware distributed Power Management). But there are huge barriers for this ECM's adoption.


Baseline Example:

Baseline Description: Data center with nominal implementation of adaptive power management features
Baseline Energy Use: 1500 kWh per year per square foot

Comments:

Energy Star, in its latest presentation on data center efficiency, estimates that typical data centers use about 400 kBtu/sf/year (Sullivan, 2010 Pg 22), which translates to 1,406 kWh/sf/year.  Given that many data centers have implemented some form of power management features but are still notably underutilized, this was conservatively rounded up to 1,500 kWh/sf/year as a reasonable estimate.

Manufacturer's Energy Savings Claims:

Comments:

Because this is a variable, customized set of hardware, software, and strategies that can be applied at some combination of four interactive levels, there isn’t a clear manufacturer that can make specific energy savings claims. 

Best Estimate of Energy Savings:

"Typical" Savings: 20%
Low and High Energy Savings: 5% to 35%
Energy Savings Reliability: 3 - Limited Assessment

Comments:

Data center power management is in some ways similar to server virtualization (#164) and to variable module management systems (#492).  All of these measures consolidate computing onto fewer servers so that others can be put to sleep, powered down, or removed.  Therefore, the energy savings possible from power management depend partly on how much virtualization has already been done: in a highly virtualized data center, computing is already quite consolidated, so less additional savings is available.

Energy savings for this technology reflect a reduction in energy use of the entire data center, not just the IT equipment, because server energy use affects infrastructure (cooling and power distribution) energy use.  If server energy use is decreased by 30%, overall data center energy will only drop by about 12% unless the ramifications of the server reduction on infrastructure energy use are also captured.  Those ramifications are challenging to estimate, and they are less than proportional.  Infrastructure system efficiencies generally drop off significantly at low loads, although some newer equipment is optimized for efficiency at lower loads.  Unless modular rather than centralized power distribution and cooling systems are used, these systems are typically considerably oversized because data centers operate far below maximum capacity, so further reductions in load tend to push infrastructure efficiency down even further.

Finally, impacts on infrastructure also vary with the type of power management strategy implemented.  When workload is consolidated into fewer servers, essentially dynamic virtualization, infrastructure energy savings are larger.  They are maximized when the concept of larger power cycle units (see below) is implemented.  The savings also vary with the nature of the workloads processed in the data center, as seen below in savings estimates for PowerNap. 

The rough energy savings estimates above assume that the reductions in IT equipment load capture two-thirds of the maximum possible impact on infrastructure energy use; a numerical sketch of this assumption follows the list below.  Details on the approaches listed can be found in the Detailed Description field of this document.  This is certainly not a comprehensive list of power management technologies and strategies, but rather is intended to provide some good examples.

  • Implementing virtual power management policies has been shown to reduce server energy use by up to 31%.  When workloads are consolidated into more power-efficient servers, energy savings can rise to 34% (Nathuji, 2007 Pg 266). 
  • DVFS (dynamic voltage and frequency scaling) saves 18-23% of power across a wide range of workloads. (Meisner, 2009)
  • PowerNap energy savings varies considerably by work type: Mail-35%, Web-59%, Backup-61%, DNS-77% (Meisner, 2009 Pg 6)
  • RAILS increases power supply unit (PSU) efficiency from about 68% to 86% (Meisner, 2009 Pg 9-10)
  • The combination of PowerNap and RAILS (see Detailed Description) has been demonstrated to reduce average server power consumption by 74% (Meisner, 2009 Pg 6)
  • Utilizing larger power cycle units (PCUs) that can be powered down as a unit along with the associated cooling and power distribution unit can increase energy savings significantly (Ganesh, 2013 Pg 11).
  • Emerson Network Power estimates that power management can reduce total data center energy by 10% (Emerson, 2013).
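
The translation from IT-level savings to whole-facility savings can be illustrated with a short calculation.  This is only a sketch: the 40% IT share of total data center energy (roughly a PUE of 2.5) is an assumption for illustration, not a value taken from the sources cited above.

    # Illustrative arithmetic for translating server savings to facility savings.
    it_share = 0.40          # assumed fraction of total data center energy used by IT equipment
    it_savings = 0.30        # example server energy reduction (30%)
    infra_capture = 2.0 / 3  # assumed share of the proportional infrastructure impact captured

    # If infrastructure energy does not respond at all, only the IT share shrinks:
    facility_savings_no_infra = it_share * it_savings            # 0.12, i.e. about 12%

    # If infrastructure savings capture two-thirds of the proportional impact:
    infra_share = 1 - it_share
    facility_savings = it_share * it_savings + infra_share * it_savings * infra_capture
    print(facility_savings_no_infra, round(facility_savings, 2))  # 0.12 and 0.24
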
Energy Use of Emerging Technology:
1,200 kWh per square foot per year

The energy use of an emerging technology is based on the following algorithm:

Baseline Energy Use - (Baseline Energy Use x Best Estimate of Energy Savings), using either the typical savings or the high end of the savings range.
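
Applied to the values in this record, the algorithm is a one-line calculation; the sketch below simply restates the 1,500 kWh/sf/year baseline and 20% typical savings from above.

    baseline_kwh_per_sf = 1500          # baseline energy use, kWh per square foot per year
    typical_savings = 0.20              # best estimate of typical energy savings
    emerging_tech_use = baseline_kwh_per_sf * (1 - typical_savings)
    print(emerging_tech_use)            # 1200 kWh per square foot per year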

Technical Potential:
Units: square foot
Comments:

Very similar to technologies #164 Server Virtualization and #492 Variable Module Management Systems. 

First Cost:

Installed first cost per: square foot

Comments:

No estimate of cost has been entered, partly because adaptive power management for IT equipment is not a single technology like an LED lamp.  It is a combination of technologies and strategies that can be implemented at several interactive levels.  None of the extensive literature reviewed provided a cost estimate for implementation.

Cost Effectiveness:

Simple payback, new construction (years): N/A

Simple payback, retrofit (years): N/A


Cost Effectiveness is calculated using baseline energy use, best estimate of typical energy savings, and first cost. It does not account for factors such as impacts on O&M costs (which could be significant if product life is greatly extended) or savings of non-electric fuels such as natural gas. Actual overall cost effectiveness could be significantly different based on these other factors.

Detailed Description:

It’s key to first understand why power management is important.  Data centers are notoriously inefficient when operating at low loads, which is how most operate: at about 6-12% of full load (Glanz, 2012).  Unfortunately, idle servers use more than half the energy of fully active servers (Meisner, 2009).  There are two areas where the energy use of a data center can be improved: the IT equipment and the data center infrastructure (cooling and power distribution systems).  Adaptive power management focuses on IT equipment, but reducing server energy use inherently reduces the energy use of infrastructure equipment as well.

There are some widespread misconceptions about server energy use.  With some equipment, energy use is roughly proportional to useful work: the more energy used, the more heat, light, or pumping is delivered.  Servers, however, typically use more than half of their maximum energy while performing little or no useful work (Pflueger, 2010 Pg 7) (WSU, 2013).  Newer servers are being developed that use only 35% of peak power while idling (Bhattacharya, 2012).  While utilization rates can be improved with virtualization and other data management strategies, there are challenges to doing so.  One is the need to accommodate customers’ occasional spikes in demand for computing power.  Another is critical data that isn’t easily virtualized (Meisner, 2009 Pg 1).  However, most of the new data being produced in the world is accessed as read-only or written as new data rather than updated, which makes it a good target for power management because replicas can be powered down without adversely impacting data access (Ganesh, 2013 Pg 5).

Power management, though, can be tricky.  Typical idle periods, although frequent, last seconds or less, and thus require more complex management systems (Meisner, 2009 Pg 1).

Saving energy is the primary but not the only benefit of power management.  Half of the respondents in the Roadmap survey discussed below anticipated capacity constraints in one of their data centers within the next 24 months, and worried about a lack of power and/or cooling capacity in the face of explosive load growth.  Power management can help reduce the need for power and cooling capacity (Pflueger, 2010 Pg 17).

There are two primary approaches to addressing the energy waste of underutilized servers.  One is to use virtualization to gather complementary workloads and consolidate them onto fewer servers so that those servers run at much higher utilization (WSU, 2013).  Another is to use power management to identify idle servers and power them down or at least put them into hibernation (Pflueger, 2010 Pg 7).  Both can generate significant energy savings and both can be implemented together, but their savings potentials are not additive; implementing one reduces the savings potential of the other, as the sketch below illustrates.
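
One simple way to picture why the two approaches are not additive is to assume that each measure can only act on the idle-server energy the other leaves behind.  The percentages below are made-up inputs for illustration, not estimates from the literature.

    virtualization_savings = 0.25   # hypothetical fraction saved by virtualization alone
    power_mgmt_savings = 0.20       # hypothetical fraction saved by power management alone

    # A naive additive estimate overstates the combined effect:
    additive = virtualization_savings + power_mgmt_savings                   # 0.45
    # If the second measure only acts on what remains after the first:
    combined = 1 - (1 - virtualization_savings) * (1 - power_mgmt_savings)   # 0.40
    print(additive, combined)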

It is also important to understand what is actually meant by power management.  “Power management” is not one but a collection of technologies and strategies that combine to reduce the energy use of idle servers.  Which features are available in a data center is partially a function of the age of the server; the newer the servers, the more power management options can be implemented.  Furthermore, the features can be applied at the component level, the system level, the rack level, and/or the data center level, each interacting with other levels.  Different power management features have different potential impacts on server performance, so it’s important to understand these and select products that minimize the adverse impact on the performance criteria of greatest importance to a data center manager and the end use customers.  For more on this, see the “End User Drawbacks” section of this document.

The following are examples of power management technologies and strategies.

Server Workload Consolidation

Computing workloads can be consolidated onto fewer active servers, first through periodic virtualization and then through “load localization,” a dynamic consolidation of workloads among the remaining servers.  Using load localization, data center operators can maximize the number of idle servers and the duration that they remain idle, thereby optimizing the potential energy savings from powering down idle servers and their associated infrastructure.  If the servers within a “power cycle unit” (PCU) can be powered down along with their associated cooling and power distribution equipment, energy savings are maximized.  This can be accomplished by sharing data across PCUs for redundancy (the industry standard is three replicas of data) so that one PCU can be powered down without impacting the availability of that data (Ganesh, 2013 Pg 3-4) (WSU, 2013).  A simplified sketch of the consolidation step follows.
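
The sketch below treats the consolidation step as a simple greedy packing problem: workloads are packed onto as few servers as possible so the remainder (and, ideally, whole PCUs) can be powered down.  The capacities, workload sizes, and server count are hypothetical, and real consolidation engines must also honor replica placement and performance constraints.

    def consolidate(workloads, capacity):
        """Greedy first-fit packing of workload utilizations onto active servers."""
        servers = []  # each entry is the total utilization already placed on one server
        for load in sorted(workloads, reverse=True):
            for i, used in enumerate(servers):
                if used + load <= capacity:
                    servers[i] += load
                    break
            else:
                servers.append(load)  # no room anywhere, so open another active server
        return servers

    workloads = [0.10, 0.05, 0.20, 0.08, 0.12, 0.04]   # hypothetical per-workload utilizations
    active = consolidate(workloads, capacity=0.60)
    idle = 10 - len(active)                            # assuming a 10-server pool
    print(f"{len(active)} server(s) stay active; {idle} of 10 can be powered down")
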

Underprovisioning

Since all the components in all the servers in a data center will never be fully active simultaneously, data center designers can size power and cooling provisioning to the actual observed peak energy use, which is likely to be much less than the sum of the server nameplate ratings.  This reduces provisioning equipment costs and allows the provisioning equipment to operate at a higher load fraction, where it is more efficient.  However, measures need to be taken to ensure that changes in IT equipment and software don’t push power and cooling needs beyond the reduced capacity provided, which could result in equipment failures.  See Power Capping below for more information (Bhattacharya, 2012).
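
A small numerical illustration of the sizing difference follows; the nameplate rating, measured peak, and 20% safety margin are all made-up values for the sketch.

    servers = 40
    nameplate_watts_each = 750               # hypothetical nameplate rating per server
    observed_peak_watts = servers * 320      # hypothetical measured peak draw of the group

    nameplate_sizing = servers * nameplate_watts_each    # 30,000 W if sized to nameplate
    underprovisioned = int(observed_peak_watts * 1.2)    # observed peak plus a 20% margin
    print(nameplate_sizing, underprovisioned)            # 30000 W vs 15360 W provisioned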

Sleep States

The ability to modify sleep states is provided with most modern servers.  Sleep states greatly reduce power requirements of a computer while it is idle.  While sleep states are quite common among mobile devices, laptops and desktop computers, they are rarely used in current servers due to a perception of unacceptably long restart delays.  Also, unlike consumer devices, servers don’t have a user to determine when it’s time to wake up (Meisner, 2009 Pg 3-4) (Pflueger, 2010 Pg 7).

Power Capping   

Power capping is primarily used to protect racks of servers from damaging power spikes by limiting the amount of power that can be supplied to that equipment.  Most new servers have power capping mechanisms and most system management software is equipped to take advantage of power capping.  With automated granular power and temperature monitoring, dynamic power capping adjusts the cap to match workload and modifies CPU frequencies on the fly.  However, not all equipment is compatible with power capping.  Some legacy hardware may be unable to respond to a power cap (Klaus, 2013).  Power capping is also used in data centers where power and cooling capacity is sized to actual observed peak power rather than equipment nameplate ratings to avoid spikes in software activity that exceed the infrastructure capacity provided and cause system failures.  For this usage of power capping, it needs to have enough speed and stability to handle the dynamic changes in power needs (Bhattacharya, 2012).
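
A schematic sketch of a dynamic power-capping control loop is shown below.  The functions read_rack_power() and set_cpu_frequency_limit() are hypothetical stand-ins for whatever monitoring and management interfaces a given platform exposes; they are not calls from any specific product, and the cap value and frequency steps are illustrative.

    import time

    POWER_CAP_WATTS = 8000                       # cap assigned to this rack (illustrative)
    FREQ_STEPS_MHZ = [1200, 1600, 2000, 2400, 2800]

    def power_capping_loop(read_rack_power, set_cpu_frequency_limit):
        step = len(FREQ_STEPS_MHZ) - 1           # start at full frequency
        while True:
            power = read_rack_power()            # measured rack power draw, in watts
            if power > POWER_CAP_WATTS and step > 0:
                step -= 1                        # throttle down to enforce the cap
            elif power < 0.9 * POWER_CAP_WATTS and step < len(FREQ_STEPS_MHZ) - 1:
                step += 1                        # headroom available, restore performance
            set_cpu_frequency_limit(FREQ_STEPS_MHZ[step])
            time.sleep(1)                        # re-evaluate once per second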

PowerNap

This strategy addresses the fact that server idle periods may last only a few seconds or less.  It transitions an entire blade server system between a high-performance active state and a near-zero-power idle state (6% of peak power) in response to instantaneous load.  Special system components signal the beginning and completion of computing work. 
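
A back-of-the-envelope model shows why PowerNap savings depend so strongly on workload.  Only the 6%-of-peak nap power comes from the source; the 60% idle power without PowerNap and the 20% utilization are assumptions for the sketch.

    p_peak = 1.0        # peak server power, normalized
    p_idle = 0.60       # assumed idle power without PowerNap (over half of peak)
    p_nap = 0.06        # PowerNap idle power, 6% of peak (Meisner, 2009)
    utilization = 0.20  # assumed fraction of time spent doing useful work

    avg_without = utilization * p_peak + (1 - utilization) * p_idle   # 0.68
    avg_with = utilization * p_peak + (1 - utilization) * p_nap       # about 0.25
    print(round(1 - avg_with / avg_without, 2))                       # ~0.64, about 64% savings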

RAILS

Redundant Array for Inexpensive Load Sharing (RAILS) uses power supplies that remain efficient across the full range of power needed, raising average power supply efficiency from about 68% to 86%.  The Northwest Energy Efficiency Alliance’s 80 PLUS program achieved 70% market penetration in desktop computers by 2012 and has hopefully influenced server power supplies as well, although 80 PLUS only requires efficiency thresholds for typical loads and not for all the loads encountered with some power management approaches such as PowerNap.
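
The wall-power implication of the 68%-to-86% efficiency improvement is a quick calculation; the 300 W DC load below is an arbitrary example value.

    dc_load_watts = 300                             # assumed DC power delivered to the server
    input_at_68 = dc_load_watts / 0.68              # about 441 W drawn from the wall
    input_at_86 = dc_load_watts / 0.86              # about 349 W drawn from the wall
    print(round(1 - input_at_86 / input_at_68, 2))  # ~0.21, roughly 21% less input power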

DVFS

There are efforts underway to make server energy use more proportional to work output.  Processor dynamic voltage and frequency scaling (DVFS, also known as processor throttling) provides energy savings under reduced loads that can approach the cube of the proportional speed reduction, because dynamic processor power scales roughly with frequency times the square of voltage, and voltage can be lowered along with frequency.  While this is a useful contribution to overall energy savings in modern servers, processors account for only about a quarter of total server energy use.  Efforts are underway to expand energy-proportional computing from DVFS to the entire server, although some components have inherent fixed losses, so there are limits to this expansion (Meisner, 2009 Pg 1-4, 8-10).
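
The cubic relationship behind DVFS can be sketched directly, assuming dynamic CPU power scales with voltage squared times frequency and that voltage is lowered in proportion to frequency.  Real processors deviate from this ideal, and since the processor is only about a quarter of server power, whole-server savings are smaller.

    def relative_cpu_power(freq_fraction):
        """Idealized dynamic power when voltage scales with frequency: P ~ f^3."""
        return freq_fraction ** 3

    for f in (1.0, 0.8, 0.5):
        print(f, round(relative_cpu_power(f), 3))
    # 1.0 -> 1.0, 0.8 -> 0.512, 0.5 -> 0.125 of full-speed CPU power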

Barriers to Implementation

Finally, it’s important to understand how data center professionals view power management and why it isn’t implemented more widely.  The Green Grid’s Roadmap study interviewed 20 data center professionals from 19 organizations about their understanding and implementation of server power management.  Respondents included end users, product vendors, and consultants.  The survey identified several potential obstacles to more effective and widespread implementation of power management features (Pflueger, 2010 Pg 4-6).  While 20 respondents is a small sample for understanding a national industry, the total number of servers managed by the participants’ organizations was significant, ranging from 250 to over 100,000 per organization and totaling around 500,000 servers, not counting related storage and networking equipment.  These server populations also ranged from almost completely homogeneous to highly heterogeneous (drawn from a wide range of manufacturers) (Pflueger, 2010 Pg 15).

The first obstacle is a lack of understanding.  Many respondents misunderstood the technical details of power management and how to implement it as well as how to account for the benefits.  Only a third of respondents had implemented any form of power management, and all but one of them felt they lacked the tools to quantify the energy savings impacts.

Another obstacle is “split incentives”: data center managers are neither supported nor rewarded for cutting energy use, and so see little benefit in doing so.  Support would include training data center staff to research and explore options, invest in implementation, track savings, and then receive some benefit from that accomplishment.  The problem persists even when a facility manager or energy manager has an interest in reducing server energy use but no authority over the IT equipment.  It’s not uncommon for data center managers to lack access to utility bills, and customers provide no incentive to implement power management features.  Most managers therefore understandably focus on their service level agreements (SLAs) with customers, which emphasize data reliability and performance.

Some respondents were concerned about the impact of power management on server availability and performance.  One noted that vendors typically ship equipment with power management features disabled, and took this as a sign of a general lack of trust that power management features will not adversely impact system reliability and performance.

Finally, some identified the challenge of assessing power use across so much variety in server and application types.  They felt the need for a single tool to handle this, rather than having to learn different power management systems for a variety of equipment types.  For example, while most servers have sleep/hibernation capabilities, there are a number of families of power states that may be used in a server, including G, S, C, D, P, CC, and LL states.

What seems to be needed to spur implementation of power management is education and outreach, along with the development of methods to more easily collect power and energy data and to control power management across a variety of equipment, and then to quantify the benefits in terms of reduced energy use and capital expenditures.  It’s also important for data center managers to negotiate service level agreements (SLAs) with customers that do not preclude the implementation of reasonable power management features.  Encouraging the behavioral intention to adopt any new technology requires developing a perception of both usefulness and ease of use; while both are critical, ease of use is the more important of the two.  Strategies for encouragement therefore need to make implementation and documentation as easy as possible (Pflueger, 2010 Pg 4-6).

Product Information:
  • Server Technology, Sentry Power Manager
  • Intel, Node Manager
  • Avocent, Rack Power Manager
  • BayTech, Global Power Management
  • Emerson, Energy Logic 2.0

Standard Practice:

A Roadmap study of 20 data center professionals found that only a third of them had implemented any form of power management (Pflueger, 2010 Pg 5). Thus, standard practice can be assumed to be a data center with little or no existing power management.

Development Status:

The software, hardware, and strategies needed for effective power management are available in the marketplace.

Non-Energy Benefits:

A non-energy benefit is increased infrastructure capacity.  Half of the respondents in a Roadmap survey anticipated capacity constraints in one of their data centers within the next 24 months. The respondents worried about a lack of power and/or cooling capacity in the face of explosive load growth and a steady increase in power density in existing data centers.  Power management can help reduce the need for power and cooling capacity (Pflueger, 2010 Pg 17).

End User Drawbacks:

Different power management features have different potential impacts on server performance. It's vital to understand these features and select technologies and strategies that don’t adversely impact the performance criteria important to a particular data center manager and their customers or users.  The following are some more common power management features and their potential impacts on server performance.

Monitoring, measuring, and reporting should have no impact on performance since it’s just data collection.

Scaling power up and down with performance shouldn’t impact performance because idle servers remain fully active, although it does add some complexity to the system, which carries a small general risk of malfunction.

Putting a server into sleep, hibernate, or suspended mode incurs some minor risk of the server not “waking back up” quickly enough, but peak performance is unconstrained (Pflueger, 2010 Pg 12).  Furthermore, average server idle times are very short, so energy savings attributed to powering servers down may be minimal or even negative (Ganesh, 2013).

Power capping allows idle servers to remain fully active but peak performance may be constrained if it becomes necessary to enforce the power cap (Pflueger, 2010 Pg 12).

Operations and Maintenance Costs:

Comments:

No O&M costs were estimated, partly because adaptive power management for IT equipment is not a single technology like an LED lamp.  It is a combination of technologies and strategies that can be implemented at several interactive levels.  None of the extensive literature reviewed provided an estimate of operation and maintenance costs for power management systems.

Effective Life:

Comments:

The effective life of adaptive power management equipment, like most equipment in a data center, is longer than the typical replacement cycle, which is driven by rapid advances in technology.  For most data centers, IT equipment is replaced every 2-5 years, so power management equipment can be assumed to have that same effective life.

Competing Technologies:

While both adaptive power management and server virtualization are excellent measures for reducing data center energy use, implementing one decreases the potential energy savings of the other, so their predicted savings are not additive.  They could both be implemented in an integrated fashion, whereby complementary workloads are consolidated onto a smaller number of active servers through virtualization. In this way, inactive servers can be powered down and not provided with unnecessary power and cooling. 

Reference and Citations:

Sullivan, 02/04/2010. Energy Star for Data Centers
Energy Star

James Glanz, 09/22/2012. Power, Pollution, and the Internet
New York Times

John Pflueger, 01/01/2010. Roadmap for the Adoption of Power-related Features in Servers
The Green Grid
Special Notes: Green Grid White Paper #33

David Meisner, 03/01/2009. PowerNap: Eliminating Server Idle Power
Carnegie Mellon University
Special Notes: This article is also published by the University of Michigan and by TechRepublic.com.

Ripal Nathuji, 10/15/2007. VirtualPower: Coordinated Power Management in Virtualized Enterprise Systems
21st ACM Symposium on Operating Systems Principles
Special Notes: Both authors are from the Georgia Institute of Technology

Lakshmi Ganesh, 01/21/2013. Integrated Approach to Data Center Power Management
University of Texas at Austin
Special Notes: Also published by the Institute of Electrical and Electronics Engineers in IEEE Transactions on Computers, vol. 62, no. 6, June 2013.

Aihua Liang, 06/05/2013. Adaptive workload driven dynamic power management for high performance computing clusters
Elsevier, Computers and Electrical Engineering
Special Notes: This is not publicly available online.

WSU EEP, 11/14/2013. Server Virtualization
Washington State University Energy Program

Emerson, 11/14/2013. Server Power Management
Emerson Network Power

Jeff Klaus, 04/08/2013. Power Capping Puts IT Back in Control
The Data Center Journal

Arka Bhattacharya, 05/17/2012. The Need for Speed and Stability in Data Center Power Capping
Microsoft

James Kaplan, 03/09/2011. Revolutionizing Data Center Energy Efficiency
McKinsey & Company

Rank & Scores

Adaptive Power Management for IT Equipment

2013 Information Technology TAG (#8)


Technical Advisory Group: 2013 Information Technology TAG (#8)
TAG Ranking: 9 out of 57
Average TAG Rating: 3.33 out of 5
TAG Ranking Date: 10/25/2013


Technical Score Details

TAG Technical Score: 2.9 out of 5

How significant and reliable are the energy savings?
Energy Savings Score: 3.2
Comments:

  • Should be a no-brainer for implementation. The challenge is persistence.
  • Most of the power savings are outside the DC.
  • Not sure about the reliability, as the technology summary did not have background on sources.
  • IT is risk-averse toward server power management.
  • Servers spend much of their time at low load, so power management has big potential to achieve savings. I've seen estimates of 30%.
  • Savings persistence and verification are issues, because the mode can be changed at any time. Also zero cost, except to change the mode from the default.
  • Savings can be significant, but the reliability and ability to verify settings and savings are very poor. Too easy to change by IT personnel.
  • Energy savings would be highly dependent on the number of hours that the equipment could be shut down. Very reliable estimates are available for PCs or equipment supporting a defined operational schedule (office building, virtual desktop servers). Research would be needed to estimate the energy savings available for servers, networking equipment, and VOIP phones.
  • Significant, but not easily quantifiable or reliable. Energy savings depend wholly on the scheduling or management parameters selected by the operator, and can be changed at any time. Energy savings are also dependent on the load cycles of the equipment, which can also change over time. It would be very difficult to justify a prescriptive rebate program for this measure.
  • Depends on whether this includes power management of just the server or of office equipment as well - I assumed it included everything.
  • Because the power manager can easily be turned off, which can cause issues, there will be a need for recorded results.

How great are the non-energy advantages for adopting this technology?
Non-Energy Benefits Score: 2.0
Comments:

  • I'm not aware of any. No significant non-energy benefits to the IT system managers/owners.
  • Unsure.
  • I'm not aware of any non-energy benefits from adoption of power management.
  • In fact, it has perceived disadvantages. IT managers are loath to move equipment into sleep mode or turn it off due to concerns about meeting service level expectations ("Will it reliably come back on when we need it?").
  • Maybe less maintenance? Not sure.

How ready are product and provider to scale up for widespread use in the Pacific Northwest?
Technology Readiness Score: 3.3
Comments:

  • Power management software exists; however, end user acceptance may be a problem.
  • It is free and built in to most equipment.
  • Many of these power-saving features are included in the products administrators are already using. However, some fear that they diminish the peak effectiveness of the equipment.
  • Seems to be widespread availability.
  • Deployment is via writing a layer of programming, which is difficult to verify.
  • Intel is installing the function on the chip, but how the savings are proven in the short or long term is still unknown.
  • Very good - simply needs to be enabled.
  • IT managers and staff have been very reluctant to embrace existing power management technologies due to performance concerns. High risk of downtime or performance degradation and very little reward (they don't pay power bills). Implementing these technologies adds to the workload of IT staff who are typically heavily subscribed with existing operations.
  • Most servers made in the past few years have a feature set that supports power management, and there are software packages to manage server fleets from reputable, leading companies.

How easy is it to change to the proposed technology?
Ease of Adoption Score: 2.9
Comments:

  • This will take some education for the IT staff and end users.
  • If administrators can be convinced that switching to power-save modes doesn't affect peak performance, they can easily adopt this measure with their pre-existing equipment.
  • There seems to be a lot of institutional resistance to using this technology.
  • Deployment is via writing a layer of programming - not sure what expertise is required for this.
  • Adoption into the equipment is very good, but the likelihood that end users will fully adopt the technology to produce savings is very low or poor.
  • Very good - comes with most servers.
  • I'm not aware of how IT power management applications can be implemented beyond the use of networked PC power management. Given anecdotal evidence that these technologies are not being adopted for servers, I would assume that ease of adoption is low for this technology.
  • This would be the hardest to implement and achieve widespread adoption, especially if you're trying to control tenants' power management of their plug load devices. Sticking to servers would be easier, but would still require a lot of outreach and education.

Considering all costs and all benefits, how good a purchase is this technology for the owner?
Value Score: 3.2
Comments:

  • Cost seems to be low. Benefits high. The barriers seem to be institutional, not financial.
  • Very good - comes with most servers
  • I don't think I have enough detail on this technology to provide a good assessment of costs / benefits. Energy benefits are good for the owner or organization as a whole, but don't accrue to the IT group that implements the technology.
  • If they aren't already doing it, they should be.



Completed: 12/4/2013 3:57:32 PM
Last Edited: 12/4/2013 3:57:32 PM