Servers are essentially large computers. However, much of their capacity often goes unused, largely because data center managers focus on reliability rather than energy efficiency. Until recently, it was fairly common to dedicate a separate physical server to each application. Additional servers were used for testing, staging, and disaster recovery, so there could be multiple servers per application (Energy Star, 2012). In 2012, McKinsey & Company found the average server utilization rate to be 6% to 12%, and other studies have reached similar conclusions (Glanz, 2012).
A virtualized server consolidates applications and sets of data, possibly from different customers' companies, so one server may now handle the tasks previously performed by multiple servers. Instead of running many servers with low utilization rates, you can run fewer servers with higher utilization rates. A New York Times article reported that one company has been able to operate at over 96% virtualization by queuing up large jobs, and that another company shrank its server space by 60% after virtualization and server replacement (Glanz, 2012). The University of California, Santa Cruz had servers operating at a 5% utilization rate; after virtualization, those servers now operate at a 70% utilization rate (LBNL, 2007, page 2).
In addition to a one-time or periodic virtualization effort, workloads can be orchestrated among servers to concentrate computing activity on active servers and extend the time idle servers can remain powered down. Some strategies to accomplish this include:
- MAID (massive array of idle disks) concentrates popular data on a new “cache” disk (Colarelli, 2013).
- PDC (popular data concentration) uses a subset of the server for the most popular data (Pinheiro, 2004).
- Power-aware caches house the data of spun-down disks to increase their idle time (Francis, 2004).
- Write-offloading diverts write-access from spun-down disks to active disks, localizing write access (Narayanan, 2008).
- SRCMap (Sample-Replicate-Consolidate Mapping) is similar to MAID and PDC and uses write-offloading (Verma, 2010).
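The common thread in these strategies is skewing data placement so that a small set of active disks absorbs most accesses. As an illustrative sketch only (not the actual algorithm of any of the cited systems), the following Python ranks blocks by access frequency and packs the hottest onto the first disks, so the remaining disks can stay spun down longer:

```python
from collections import Counter

def concentrate(access_log, blocks_per_disk, disks):
    """Place the most frequently accessed blocks on the first disks.

    access_log: iterable of block IDs, one entry per access.
    blocks_per_disk: capacity of each disk, in blocks.
    disks: disk names; the caller must supply enough capacity.
    Returns {block_id: disk_name}.
    """
    popularity = Counter(access_log)
    # Sort blocks hottest-first; ties keep first-seen order (stable sort).
    ranked = sorted(popularity, key=popularity.get, reverse=True)
    placement = {}
    for i, block in enumerate(ranked):
        placement[block] = disks[i // blocks_per_disk]
    return placement

log = ["a", "a", "a", "b", "b", "c", "d"]
print(concentrate(log, 2, ["disk0", "disk1"]))
# hot blocks "a" and "b" land on disk0; cold "c" and "d" on disk1
```

A real system would also handle writes to concentrated data (as write-offloading does) and re-rank periodically as popularity shifts; this sketch shows only the placement step.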
Not all servers are good candidates for virtualization. Virtualization generally works well with x86 servers, currently the most common type. As for minimum appropriate server room size, the Natural Resources Defense Council (NRDC) estimates that virtualization can be cost-effective for businesses with at least five servers (Bennett, 2012). VMware offers a version of its vSphere virtualization suite for only $500, so even small server rooms can take advantage of virtualization (VMware, 2013). Applications with privacy, security, or regulatory restrictions may do best on dedicated servers. Servers that are good candidates for virtualization and have similar workload types are grouped and then moved, in virtual form, to as many physical servers as are needed to accommodate them. It is important to observe server loading for a month to see when computing activity peaks, and to combine servers with complementary workloads (e.g., one busy during the day and another busy at night) (Energy Star, 2012). The host servers selected must have adequate memory for the task, as this can be more of a limiting factor than processor speed. An InformationWeek survey of large data centers found a median memory (RAM) capacity of 48 GB per server, and almost a fourth of these data centers used servers with 128 GB of memory (Marko, 2012). Consolidating servers also concentrates the cooling load, and as the virtualization effort expands, heat density in the remaining racks may become considerably higher. Therefore, check that the existing cooling system is up to the task (Energy Star, 2012).
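The candidate-selection rules above, complementary load profiles plus a memory ceiling, can be sketched as a simple first-fit packing heuristic. This is an illustration, not a recommended production tool: the 48 GB host capacity echoes the survey figure above, but the 80% combined-peak limit and all names are assumptions.

```python
def can_colocate(profiles, peak_limit=0.8):
    """Combined hourly utilization of co-located servers must stay
    under peak_limit (assumed headroom threshold)."""
    combined = [sum(hour) for hour in zip(*profiles)]
    return max(combined) <= peak_limit

def pack(servers, host_ram_gb=48, peak_limit=0.8):
    """First-fit consolidation.

    servers: list of (name, ram_gb, hourly_utilization_profile).
    Returns a list of hosts, each a list of server names.
    """
    hosts = []  # each: {"ram": used GB, "profiles": [...], "names": [...]}
    for name, ram, profile in servers:
        for h in hosts:
            fits_ram = h["ram"] + ram <= host_ram_gb
            if fits_ram and can_colocate(h["profiles"] + [profile], peak_limit):
                h["ram"] += ram
                h["profiles"].append(profile)
                h["names"].append(name)
                break
        else:  # no existing host can take it; provision a new one
            hosts.append({"ram": ram, "profiles": [profile], "names": [name]})
    return [h["names"] for h in hosts]

# A day-busy and a night-busy server are complementary and share a host;
# a constantly busy server gets its own.
day   = ("web",    16, [0.6, 0.6, 0.1, 0.1])
night = ("backup", 16, [0.1, 0.1, 0.6, 0.6])
busy  = ("db",     16, [0.5, 0.5, 0.5, 0.5])
print(pack([day, night, busy]))  # [['web', 'backup'], ['db']]
```

In practice the month of observed load data Energy Star recommends would feed the profiles, and a planner would also check CPU, storage, and network, not just RAM and utilization.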
Virtualization may be done in phases, as suggested by the Green Grid. The first phase virtualizes non-critical workloads. The second phase adds critical workloads, which usually requires additional storage and network reconfiguration. The third phase implements a "virtual first" policy for future servers. The final phase outsources computing to the cloud (Talabar, 2009).
As of 2011, 55 utilities in 20 states offered financial incentives for virtualization projects. Most of these provide a simple prescriptive incentive: $X per server removed. Calculated or customized incentives are more accurate but may be less applicable for small- and medium-sized businesses. Some utilities offer a per-kWh rate for any documented data center savings. The Sacramento Municipal Utility District (SMUD), for example, offers $0.04 per kWh saved, up to 30% of the project cost or $150,000. Some utilities may want to consider a hybrid model that offers options of prescriptive, calculated, and customized incentives. Utilities should encourage recycling of old servers and work to ensure that those that are not recycled are at least removed from the grid. Utilities may want to skip inspections of smaller projects to help control administrative costs. Finally, larger utilities may do well to assign utility staff to learn about virtualization so they can advise customers.
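The SMUD-style incentive above is straightforward to compute. Assuming the two caps mean "whichever is less" (our reading of the text, not a statement of SMUD's actual rules), a hypothetical calculator looks like:

```python
def smud_incentive(kwh_saved, project_cost,
                   rate=0.04, cost_cap=0.30, hard_cap=150_000):
    """Per-kWh incentive with two caps, per the figures in the text:
    $0.04/kWh saved, limited to 30% of project cost or $150,000.
    The 'whichever is less' interpretation is an assumption."""
    return min(kwh_saved * rate, cost_cap * project_cost, hard_cap)

# A project saving 500,000 kWh/yr at a cost of $50,000:
print(smud_incentive(500_000, 50_000))  # 15000.0 (capped at 30% of cost)
```

For a very large project, the $150,000 hard cap would bind instead; for a cheap, modest-savings project, the raw per-kWh amount would.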
A side benefit of virtualization incentive programs for utilities is that they provide an opportunity to develop a relationship with IT managers, who may be willing to consider infrastructure efficiency upgrades after the virtualization project (Lester, 2011). NRDC suggests that these companies' use of virtualization can be expanded through utility programs with robust marketing and outreach, training, and education programs for IT managers of small server rooms and the IT service provider firms that support them. This would ideally be at a time when IT managers are preparing for a major server room upgrade anyway. Utilities can also articulate the non-energy benefits of virtualization that would apply to small businesses for which utility bill savings alone may not be compelling. Utilities could also act as aggregators for manufacturers and providers of virtualization services that have not targeted the small business market (Bennett, 2012).
Because large data centers have more readily embraced virtualization than smaller data centers, utilities may be concerned about free-ridership in justifying their incentives to rate-payers. The Green Grid has found that, while many companies may have adopted virtualization, it is typically only at a shallow level with very small consolidation rates. Therefore, utilities still stand to gain substantial energy savings by incentivizing and encouraging customers to get beyond the “low hanging fruit” ("innovation" servers) into deeper savings ("mission-critical" servers) (Talabar, 2009).
Virtualization products and services are provided by VMware, Citrix, Oracle, Red Hat, and Microsoft with costs ranging from $200 (on sale) to tens of thousands of dollars. Most require or at least encourage annual subscription and support fees.