
Virtualization: Fueling Green Data Centers

Ramnish Singh
Microsoft Corporation

December 2008

Summary: Virtualization is considered a key enabler for consolidation, but it has the potential to play a much larger role in the quest for Green Information Technology (IT). This paper will discuss strategy from both the application and the infrastructure perspectives, and will close with upcoming trends. First, we will explore the “life of an application” and the role that virtualization can play in its various stages: design and development, deployment, and sustenance. Second, we will explore “life in a rack” and the following phases: hardware, power, rack space, heating and cooling, and hardware fault tolerance. Finally, we will touch on trends to look out for. Infrastructure architects can use this article to design effective virtual infrastructures that are not over-engineered, but rather are based on the current and projected needs of the organization. (9 printed pages)

Contents

Introduction
Life of an Application
Life in a Rack
Future Trends in Virtualization
Challenges
Acknowledgements
Resources

Introduction

Earth is finally fighting back. Globally, we face challenges ranging from increased energy demands to carbon emissions and their negative impact on the environment. In research done by Microsoft, the daily power consumption of a typical data center equals the monthly power consumption of thousands of homes, with a staggering 61 billion kilowatt-hours going toward data-center energy consumption. In three years, we will need an additional 10–15 power plants to keep up with data-center power-consumption requirements.

Data-center managers have two important operating guidelines: operational efficiency (reducing energy and power requirements and ensuring optimum utilization of data-center resources) and application availability (without sacrificing application performance). Usually, these two are at odds with each other. To achieve operational efficiency, a data-center manager needs to introduce new technologies; however, implementing new technologies brings some application-availability challenges. There is also the old-school thought “if it ain’t broke, don’t fix it,” which slows the adoption of new technologies. One universally acknowledged mantra for controlling data-center cost while providing application availability is virtualization.

The concept of virtualization is simple: convert physical instances to virtual instances, host multiple virtual instances on fewer physical machines, and reduce power consumption. Gartner predicted (April 2008) about 4 million virtual machines by 2009 and 611 million virtualized PCs by 2011, with IT infrastructure and operations deeply affected by virtualization by 2012. Yet architects worldwide are struggling to answer a simple question: which applications should they choose to virtualize, and how can virtualization help in combating the ultimate challenge of conserving power?

In computing, virtualization is a broad term that refers to the abstraction of computer resources. The term has been widely used since the 1960s. Virtualization lets one computer do the job of multiple computers by sharing the resources of a single computer across multiple environments. There are many forms of virtualization:

  • Platform virtualization—Separation of an operating system from the underlying platform resources.
  • Resource virtualization—Virtualization of specific system resources, such as storage volumes, name spaces, and network resources.
  • Application virtualization—Hosting of individual applications on alien hardware/software.
  • Desktop virtualization—Remote manipulation of a computer desktop.

The most common implementation of virtualization to date supports the following business scenarios: consolidation (primary objective); business continuity/disaster recovery; test and development; and security. However, virtualization is much more than simply consolidating physical servers and cutting data-center costs. The technology has far-reaching impacts, from optimum resource utilization at one end to substantial savings in energy, power, and management costs at the other.

We will now explore strategies for virtualization in data centers. First, we will dive into the "life of an application," from design to deployment to sustenance; next, we will explore "life in a rack" and its hardware, power, rack space, heating and cooling, and hardware fault tolerance; and finally, we will discuss some upcoming trends.

Life of an Application

The life of an application plays a crucial role in the quest for the optimally virtualized data center. The traditional all-or-nothing approach, either a completely virtual or a completely physical environment, leads to suboptimal utilization of resources: wasted power in a purely physical environment, and compromised performance in a purely virtual environment. A detailed analysis of the life of an application helps us understand how virtualization can help, from design and development, to deployment, and finally to sustenance.

 

Stage 1: Design and Development

The use of virtualization in a test and development environment is well known. Refer to the IEEE article “Test Optimization Using Software Virtualization” (volume 23, issue 5, September–October 2006) for relevant details. Also visit the Microsoft Virtualization Case Studies home page for a list of case studies showcasing how organizations have benefited from implementing virtualization in their test and development environments.

Stage 2: Deployment

Application deployment depends on how application components are wired together: that is, whether the application is distributed or monolithic. Distributed applications spread their components across multiple physical tiers. You can convert these physical tiers into virtual instances and host them all on a single physical server (see Figure 1), thereby increasing the utilization of that server's hardware and reducing the number of physical servers required for the implementation. By replicating such deployments across additional servers, you also make the design scalable, so that servers can be added to the farm on demand.

Figure 1. Virtualization of a loosely coupled application
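
The consolidation step shown in Figure 1 can be treated as a simple placement check: each tier of the distributed application becomes a virtual machine with a known resource footprint, and the planner verifies that the combined footprint fits on a single host. The following Python sketch illustrates the idea; the VM sizes, host capacity, headroom value, and helper names are illustrative assumptions, not measurements from any real deployment.

# Illustrative consolidation check: do the tiers of a distributed
# application fit on one physical host as virtual machines?
# All numbers below are hypothetical examples, not vendor guidance.

from dataclasses import dataclass

@dataclass
class VmSpec:
    name: str
    vcpus: int
    memory_gb: float

@dataclass
class HostSpec:
    name: str
    cores: int
    memory_gb: float

def fits_on_host(vms: list[VmSpec], host: HostSpec, headroom: float = 0.2) -> bool:
    """Return True if the combined VM footprint fits on the host,
    keeping a fraction of capacity free as headroom for peaks."""
    usable_cores = host.cores * (1 - headroom)
    usable_mem = host.memory_gb * (1 - headroom)
    total_vcpus = sum(vm.vcpus for vm in vms)
    total_mem = sum(vm.memory_gb for vm in vms)
    return total_vcpus <= usable_cores and total_mem <= usable_mem

# Example: web, application, and database tiers consolidated onto one host.
tiers = [
    VmSpec("web", vcpus=2, memory_gb=4),
    VmSpec("app", vcpus=4, memory_gb=8),
    VmSpec("db", vcpus=4, memory_gb=16),
]
host = HostSpec("host-01", cores=16, memory_gb=64)
print(fits_on_host(tiers, host))  # True for these illustrative numbers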

Monolithic applications, on the other hand, require further investigation in terms of performance and resource requirements. The available options for these applications are either to retain them in their physical state, or to convert each into a single virtual instance and host it on a shared physical system that can provide the required resources. The shared approach leads to reduced power and rack-space requirements, thereby contributing to reduced energy requirements (see Figure 2).


Figure 2. Virtualization of a tightly coupled application

Stage 3: Sustenance

An application has periods of maximum and minimum concurrent use, and its resource requirements and utilization vary in direct proportion to that use. You can identify this pattern using system- and application-management tools (for example, the System Center family of tools from Microsoft). You can collect appropriate data sets, which can help identify an application's resource requirements, usage, downtime, peak resource utilization, and so on. As an example, let’s explore the life of a performance-appraisal system within an organization. For the purpose of this example, we have collected the application’s usage data over a two-year period (see Figure 3).


Figure 3. Usage pattern of a performance-appraisal application in an enterprise

As seen in Figure 3, the application has periods of maximum and minimum use, which suggests that during peak usage it needs sufficient resources for optimum performance. At other times, the application can be run with considerably fewer resources.
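
One way to turn the collected monitoring data into the peak/off-peak classification shown in Figure 3 is to compare each period's utilization against a threshold. The sketch below assumes a simple set of monthly average utilization samples and a hypothetical threshold; the actual data and pattern would come from your monitoring tools (for example, the System Center family), and the month-by-month numbers here are invented for illustration.

# Classify monitoring samples into peak and off-peak periods.
# The samples and threshold below are illustrative only; real values
# would come from system- and application-monitoring tools.

monthly_cpu_utilization = {
    "Jan": 12, "Feb": 10, "Mar": 65, "Apr": 70,  # assumed appraisal cycle
    "May": 15, "Jun": 9,  "Jul": 8,  "Aug": 11,
    "Sep": 60, "Oct": 68, "Nov": 14, "Dec": 10,  # assumed mid-year review
}

PEAK_THRESHOLD = 50  # percent; an assumed cut-off for "needs full resources"

peak_months = [m for m, u in monthly_cpu_utilization.items() if u >= PEAK_THRESHOLD]
offpeak_months = [m for m, u in monthly_cpu_utilization.items() if u < PEAK_THRESHOLD]

print("Run with full (physical) resources:", peak_months)
print("Run as a consolidated virtual instance:", offpeak_months)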

This application can be hosted in a data center in two avatars—physical and virtual. The application can transition between physical and virtual states based on its resource requirements and usage patterns. The application can remain in a virtual state until a need for extra resources arises (which can be identified using application and system monitoring tools). Once resource requirements increase, you can move the application from a virtual to a physical state (see Figure 4). You can manage this transition seamlessly using technology available today—for example, the System Center family of products provides tools that aid the transition between physical and virtual states.


Figure 4. Application states—physical and virtual

This concept is better suited for organizations that have limited investments in virtual infrastructure and that would like to conserve power by powering off physical systems during periods of low usage and powering them back on when usage increases (for an example, see the Active Power Management technology available from Cassatt).

The process of moving machines between physical and virtual states is shown in Figure 5. In step 1, all machines are in a physical state. In step 2, the physical systems are converted to virtual instances; however, the physical machines are still operational. Step 3 spreads these virtual machines across multiple physical hosts based on their resource requirements, thereby reducing the number of physical servers required. In step 4, a virtual machine requires additional resources; this need can be identified by technologies such as Performance and Resource Optimization (PRO), which is available in the System Center family of products. In step 5, the machine is moved from its virtual state to its physical state. Finally, step 6 shows the virtual instance successfully converted to its physical state, running on physical hardware that can provide the necessary resources. The machine can transition back to its virtual state when the resource requirements decrease.

Figure 5. Application state migration (physical to virtual and vice versa)
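
The decision in steps 4 and 5 of Figure 5 can be expressed as a small control loop: watch a virtual machine's sustained resource usage and, when it stays above a threshold, trigger a virtual-to-physical move (and the reverse move when demand falls). The Python sketch below models only the decision; the thresholds and the move_to_physical/move_to_virtual helpers are assumptions standing in for whatever conversion tooling you use, not a real API.

# Decision loop for moving an application between virtual and physical
# states, as in Figure 5. Thresholds and function names are illustrative;
# the actual conversion would be performed by your management tooling.

SCALE_UP_THRESHOLD = 0.80    # sustained utilization above this -> go physical
SCALE_DOWN_THRESHOLD = 0.30  # sustained utilization below this -> go virtual

def move_to_physical(app: str) -> None:
    print(f"[action] converting {app} from virtual to physical")

def move_to_virtual(app: str) -> None:
    print(f"[action] converting {app} from physical to virtual")

def evaluate(app: str, state: str, recent_utilization: list[float]) -> str:
    """Return the new state ('physical' or 'virtual') for the application."""
    sustained = sum(recent_utilization) / len(recent_utilization)
    if state == "virtual" and sustained >= SCALE_UP_THRESHOLD:
        move_to_physical(app)
        return "physical"
    if state == "physical" and sustained <= SCALE_DOWN_THRESHOLD:
        move_to_virtual(app)
        return "virtual"
    return state  # no change

# Example: the appraisal system entering its peak season.
state = evaluate("appraisal-app", "virtual", [0.82, 0.88, 0.91])
print(state)  # 'physical'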

Another option for applications that require dynamic resource usage but exhibit monolithic behavior is to convert them from physical to virtual instances. These virtual instances can then be hosted on systems with limited resources during periods of low resource requirements. As resource requirements increase, the virtual instance can be moved to a system that can provide the necessary resources. The implementation methods and technology dictate whether there will be any disruption of the services these applications offer during the move from one physical machine to another. The power draw of the servers can then be configured to scale as the load on each server increases (see Figure 6). During normal operations, server A hosts only one machine and server B hosts multiple machines. In this state, we can power off some of the CPU cores that are not being used. As a virtual machine needs more resources, it can be moved from server B to server A, and the required amount of memory and CPU cores can be powered on.

Figure 6. Application (virtual) hosted on different systems based on resource requirements
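
Figure 6 can be read as a placement rule: keep a virtual machine on the heavily shared host (server B) while its demand is low, and move it to the larger host (server A) when demand rises, powering on only the cores it actually needs. A minimal sketch of that rule follows; the host names, demand threshold, and helper functions are hypothetical and do not correspond to any specific product API.

# Placement rule from Figure 6: a demanding virtual machine is moved from
# the shared host (server B) to the larger host (server A), and only the
# cores that are actually needed on server A are powered on.
# Names, capacities, and helpers below are illustrative.

import math

def cores_needed(cpu_demand: float, core_capacity: float = 1.0) -> int:
    """Number of cores to power on for a given CPU demand (in core units)."""
    return math.ceil(cpu_demand / core_capacity)

def place_demanding_vm(vm: str, cpu_demand: float, threshold: float = 2.0) -> str:
    """Keep the VM on the shared host while demand is low; otherwise move it
    to the larger host and power on just enough cores there."""
    if cpu_demand < threshold:
        return f"{vm}: stay on server B (shared host)"
    powered_on = cores_needed(cpu_demand)
    return f"{vm}: migrate to server A, power on {powered_on} cores"

print(place_demanding_vm("reporting-vm", cpu_demand=1.2))  # stays on server B
print(place_demanding_vm("reporting-vm", cpu_demand=5.5))  # moves to server A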

Certain application architectures scale more efficiently; for example, in a typical three-tier Web-based application, the presentation and application layers are built so that they can be scaled out. Scaling out works by adding appropriately configured machines during periods of high usage and switching them off during periods of low usage. This elastic behavior of computing resources can be implemented with ease through virtualization: additional virtual machines (on multiple different hosts) can be automatically provisioned and configured to support increased application resource requirements, and then automatically removed once the need for additional resources diminishes. The most important factor here is to provide sufficient virtual infrastructure that can be powered on and off on demand. Figures 7 through 9 illustrate this elastic behavior.


Figure 7. Application during normal usage (users and systems)


Figure 8. Application requiring additional resources due to an increase in users (load)


Figure 9. Additional virtual machines added on the fly to support increased usage
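
The elastic behavior in Figures 7 through 9 amounts to a simple scale-out/scale-in rule applied to a pre-provisioned pool of virtual machines. The sketch below shows one way to express such a rule; the target utilization, pool limits, and the figures in the example runs are assumptions, and the resulting counts would map onto whatever provisioning tooling you use.

# Elastic scale-out of a web/application tier, as in Figures 7 through 9.
# Target utilization, limits, and example loads are illustrative placeholders.

import math

MIN_VMS = 2                # always-on baseline (assumed)
MAX_VMS = 10               # size of the pre-provisioned virtual pool (assumed)
TARGET_UTILIZATION = 0.60  # desired average utilization per VM (assumed)

def desired_vm_count(current_vms: int, avg_utilization: float) -> int:
    """Scale the VM count so that average utilization approaches the target."""
    load = current_vms * avg_utilization            # total load in "VM units"
    desired = math.ceil(load / TARGET_UTILIZATION)  # VMs needed at the target
    return max(MIN_VMS, min(MAX_VMS, desired))

# Normal usage (Figure 7): 4 VMs at 55 percent stay at 4.
print(desired_vm_count(4, 0.55))   # 4
# Load increases (Figure 8): 4 VMs at 95 percent -> scale out to 7 (Figure 9).
print(desired_vm_count(4, 0.95))   # 7
# Load drops again: 7 VMs at 20 percent -> scale back in to 3.
print(desired_vm_count(7, 0.20))   # 3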

 

The decision to virtualize is not driven solely by an application's resource requirements. It also requires a detailed understanding of the application's design and development, deployment, and sustenance. It is also important to understand that proper hardware sizing is a key element of an optimally virtualized infrastructure. The rule of thumb is to provision as much hardware as necessary to cover the sum of the applications' peak usage, in order to ensure coverage of the worst performance cases. When coupled with a mechanism to turn components on and off on demand, this results in the most power-efficient infrastructure.
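
This sizing rule of thumb can be made concrete with a short calculation: provision enough shared capacity to cover the sum of the applications' peak demands, then rely on on-demand power management to run below that ceiling most of the time. The demands and host size in the sketch below are illustrative only.

# Rule-of-thumb capacity sizing: provision enough shared hardware to cover
# the sum of the applications' peak demands. All numbers are illustrative.

import math

peak_demand_vcpus = {
    "appraisal-app": 8,
    "intranet-portal": 6,
    "reporting": 12,
    "test-and-dev": 10,
}

HOST_CORES = 16  # usable cores per physical host (assumed)

total_peak = sum(peak_demand_vcpus.values())
hosts_required = math.ceil(total_peak / HOST_CORES)

print(f"Total peak demand: {total_peak} vCPUs")
print(f"Hosts to provision: {hosts_required} (power them on and off on demand)")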

Life in a Rack

Now that we’ve covered the software side, let’s turn to the hardware side of virtualization. Virtualization is a key enabler for reducing both power and cooling requirements in data centers; we will examine these opportunities in detail in this part of the article.

Stage 1: Hardware

x86 hardware typically consumes about 80 percent of its normal workload power even when idle. This can be addressed by replacing x86 with x64-based hardware, which offers better power management; you can host both x86 and x64 virtual workloads on the same x64-based hardware. (See the VMware article “Energy Efficiency” for more details.)
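
The 80 percent idle figure quoted above is what makes consolidation pay off: a fleet of mostly idle servers draws close to its loaded power, while a few well-utilized hosts can replace it. The back-of-the-envelope sketch below illustrates the effect; the per-server wattage and server counts are assumptions, not measured values.

# Back-of-the-envelope effect of consolidating lightly used servers when
# idle power is roughly 80 percent of loaded power. Inputs are illustrative.

PEAK_POWER_W = 400    # assumed per-server power under load
IDLE_FRACTION = 0.80  # idle draw as a fraction of loaded draw (from the text)

def consolidation_savings_w(servers_before: int, hosts_after: int) -> float:
    """Approximate power saved by consolidating mostly idle servers."""
    before = servers_before * PEAK_POWER_W * IDLE_FRACTION  # mostly idle fleet
    after = hosts_after * PEAK_POWER_W                       # hosts run loaded
    return before - after

# Ten mostly idle servers consolidated onto two well-utilized hosts.
print(f"{consolidation_savings_w(10, 2):.0f} W saved")  # 2400 W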

Stage 2: Power

Apart from hardware, software also plays a key role in conserving power. Figure 10 shows an analysis of power usage when the operating system is changed on the same hardware. (See the Microsoft article “Windows Server 2008: Enabling Energy-Efficient Performance” for more details.) The figure illustrates that the choice of base operating system is a key factor in reducing the power requirements of a virtualized infrastructure.

Figure 10. Operating systems and power usage

Stage 3: Rack Space

The reduction in server sprawl from increased use of virtualization results in fewer servers in your data center; however, this reduction usually comes through the deployment of high-density equipment such as blade servers. There are substantial benefits to reducing the power consumption of IT equipment, because both data-center area and Total Cost of Ownership (TCO) are strongly affected by power consumption. Table 1 shows how further reductions in IT-equipment power consumption and size affect data-center area and TCO (as described in American Power Conversion [APC] White Paper #46). Reductions in power consumption have a much greater benefit than proportional reductions in size.

Table 1. Data-center area and TCO savings from reducing IT equipment size and power consumption

Blade servers, because of their shared chassis infrastructure for power supplies and cooling fans, achieve a 20–40 percent reduction in electrical power consumption when compared with conventional servers of equivalent computing power. These savings represent a significant reduction in TCO, because TCO is dominated by power-related costs rather than by IT space-related costs.
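
The 20–40 percent range translates directly into an annual energy and cost estimate. The sketch below shows the arithmetic with an assumed rack power and electricity price; substitute your own numbers.

# Rough annual savings from replacing conventional servers with blades,
# using the 20-40 percent power-reduction range quoted above.
# Rack power and electricity price are illustrative assumptions.

RACK_POWER_KW = 8.0    # conventional-server rack draw, assumed
PRICE_PER_KWH = 0.10   # assumed electricity price (USD)
HOURS_PER_YEAR = 24 * 365

for reduction in (0.20, 0.40):
    saved_kwh = RACK_POWER_KW * reduction * HOURS_PER_YEAR
    print(f"{int(reduction * 100)}% reduction: "
          f"{saved_kwh:,.0f} kWh/year, ~${saved_kwh * PRICE_PER_KWH:,.0f}/year")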

Stage 4: Heating and Cooling

Heat output inside a data center plays a vital role in determining cooling requirements. According to the data-center cooling design recommendations in APC White Paper #25, the largest contribution to total thermal output comes from the IT loads (see Figure 11).


Figure 11. Relative contributions to total thermal output of a typical data center

Because virtualization reduces the number of servers required, the most tangible effect is a reduction not just in server count, but in the total area that needs to be cooled.

To estimate total cooling requirements, use Worksheet 1 to calculate the heat output of your data center, as described in APC White Paper #25.

 

Worksheet 1. Calculating total cooling requirements for data centers (Source: APC White Paper #25)
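
Worksheet 1 boils down to summing the heat contributed by the IT load, the UPS and power-distribution systems, lighting, and people. The sketch below follows that structure; the coefficients shown are commonly quoted rule-of-thumb values for this kind of estimate and are an assumption here, so verify them against the worksheet in APC White Paper #25 before relying on them.

# Estimate of total data-center heat output, structured after Worksheet 1
# (APC White Paper #25). Coefficients are commonly cited rule-of-thumb
# values; check them against the original worksheet before use.

def total_heat_output_watts(
    it_load_w: float,               # total IT load power in watts
    power_system_rating_w: float,   # rated power of the UPS/power system
    floor_area_sqft: float,
    max_personnel: int,
) -> float:
    it_heat = it_load_w                                        # IT load: 1:1
    ups_heat = 0.04 * power_system_rating_w + 0.05 * it_load_w
    distribution_heat = 0.01 * power_system_rating_w + 0.02 * it_load_w
    lighting_heat = 2.0 * floor_area_sqft                      # ~2 W per sq ft
    people_heat = 100.0 * max_personnel                        # ~100 W per person
    return it_heat + ups_heat + distribution_heat + lighting_heat + people_heat

# Illustrative inputs only.
print(total_heat_output_watts(
    it_load_w=80_000, power_system_rating_w=100_000,
    floor_area_sqft=2_000, max_personnel=5))  # 95100.0 W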

Stage 5: Hardware Fault Tolerance

The strategy for surviving hardware failures also plays a crucial role in identifying the true TCO of virtualization. Because a single physical machine hosts a number of virtual instances, investigating and investing in fault-tolerant infrastructure to support virtualization helps ensure that Service-Level Agreements (SLAs) for business continuity and disaster recovery are met. Virtualization also helps reduce the hardware components required to achieve fault tolerance, thereby reducing power and energy requirements.

Future Trends in Virtualization

Many virtualization technologies will be significant in the field of Green IT in the future: virtual desktop infrastructure, application virtualization, and virtual grids are just three examples. These technologies are worth exploring in further detail to see how they can contribute to the quest for virtualization and the reduction of energy requirements. There are many facets of green computing, and virtualization is one of the keys to lowering energy requirements through the optimized use of computing resources.

Challenges

Virtualization has arrived and is here to stay. However, the technology brings its fair share of challenges, as well. As suggested in the article “Virtualization Industry Challenges,” these challenges can be grouped as follows:

  • Today's challenges (support, licensing, and capacity planning)
  • Near-term challenges (reliability, provisioning, and efficiency)
  • Mid-term challenges (scalability, security, and accountability)
  • Timeless challenges (responsibility)
  • Compliance

Broader adoption is only a matter of time; the sooner you embrace this technology, the sooner you can start saving money for your organization.

Acknowledgements

  1. Ramkumar Kothandaraman, Director, Microsoft Technology Center (India)
  2. Thorsten Wujek, Technical CEO, STEIN-IT GmbH
  3. Diego Dagum, Architect, Microsoft Corporation
  4. Deepinder Gill, Principal Application Manager, Microsoft IT

Resources

  1. APC White Paper #25: Calculating Total Cooling Requirements for Data Centers
  2. APC White Paper #46: Cooling Strategies for Ultra-High Density Racks and Blade Servers
  3. Active Power Management from Cassatt Corporation
  4. System Center and Virtualization family of products & technologies from Microsoft Corporation
  5. Virtualization products and technologies from VMware Corporation