Getting the Most Out of Virtualization

Greg Hutch

June 2010


Summary: Virtualization is a powerful architectural technique, comprising a set of technologies that can drive a wide range of benefits.



Most organizations fail to realize the full potential of virtualization.

While various virtualization technologies have been around for a long time, we now have the pieces that are required to drive complete virtualization strategies that are optimized for different business scenarios.

The origin of virtualization, and the most common reason for an organization to pursue it, is efficiency and the potential cost savings. Virtualization has a profound impact on IT environments, as it abstracts the physical characteristics of computing. The following virtualization strategy was focused on reducing costs and increasing efficiency.


Virtualization for Efficiency

The organization needed to drive down all costs, including IT, as much as possible. At the same time, it was under pressure from its customers to provide more online services. The organization needed to get as much value from its assets as possible.

We started with monitoring and analysis of the existing server workloads. We quickly found that the existing servers were old single-purpose servers that had very low CPU-utilization rates; they had been sized to have enough capacity for their peak utilization, which was rarely needed. Report servers were sized for a peak that occurred only at month's end. Backup servers were idle during business hours. File servers never used their full processor capacity and were almost idle outside of business hours. Security servers were busy only during business-day mornings. The disaster-recovery servers were sized for their emergency workloads, but never ran at that level outside of testing. It was very clear that virtualization could help us consolidate these workloads onto fewer processors. (The situation with memory was comparable.)
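The consolidation argument can be sketched numerically. The following Python sketch uses invented hourly utilization profiles (illustrative assumptions, not figures from the project) to show why complementary peaks matter: the sum of the individual peaks far exceeds the peak of the combined hour-by-hour load, and it is the latter that a consolidated host must be sized for.

```python
# Hypothetical 24-hour CPU-utilization profiles (percent of one core),
# echoing the complementary peaks described above: backup runs overnight,
# file serving peaks during business hours, security peaks in the morning.
workloads = {
    "report":   [5] * 9 + [10] * 8 + [5] * 7,    # mostly idle day to day
    "backup":   [80] * 6 + [2] * 12 + [80] * 6,  # busy only overnight
    "file":     [5] * 8 + [60] * 9 + [5] * 7,    # busy in business hours
    "security": [5] * 8 + [70] * 4 + [10] * 12,  # busy weekday mornings
}

def combined_peak(profiles):
    """Peak of the summed hour-by-hour demand across all workloads."""
    hourly_totals = [sum(hour) for hour in zip(*profiles.values())]
    return max(hourly_totals)

# Sizing each server for its own peak vs. sizing one host for the shared load.
naive_peak = sum(max(p) for p in workloads.values())
shared_peak = combined_peak(workloads)

print(f"Sum of individual peaks: {naive_peak}%")   # 220
print(f"Actual combined peak:    {shared_peak}%")  # 142
```

Because the peaks rarely coincide, the consolidated host needs far less capacity than the sum of the dedicated servers it replaces.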

It would have been easy to cut the number of servers in half, but we wanted to optimize virtualization. In the best case, we were able to virtualize and consolidate 30 of the older single-purpose servers onto a single new server. This was an unusually aggressive consolidation approach, but it proved supportable.

Before virtualization, each server supported an application or service that had a unique set of service expectations. A small number of applications had specific high-availability requirements, and special servers had been allocated to provide that service. High-availability clusters generally demanded specific configurations, special versions of software, and an active/passive mode that drove (at best) 50-percent productivity from the clusters. Moving to a virtualized approach meant that we could reduce the overhead and still provide fault-tolerant platforms. To maximize consolidation, we standardized the service levels. This allowed us to consolidate workloads that had identical or compatible service levels.

Another cost-reduction opportunity revolved around networking. The large number of physical servers led to a large number of network adapters, ports, and media types. It was clear that virtualization could allow us to reduce the number of network connections, ports, and media types.

Another cost-saving opportunity was the ability to buy capacity "just in time." In the past, servers were ordered with extra resources, because changes were challenging and time-consuming.

When we had driven the reduction in servers and networking, there was far less demand for data-center resources. We needed far less physical space, electrical service, uninterruptible-power-supply (UPS) capacity, and air-conditioning capacity. The result was that we could cancel the planned data-center upgrades. We did need to change the configuration of the data center, to avoid the creation of hot spots as we consolidated. While the project was not driven by a "green agenda," the efficiencies did contribute to a significant data-center "greening."

We also discovered that virtualization led to larger-than-expected licensing savings. Increased CPU efficiency and a reduction in the number of servers allowed us to significantly reduce the license requirements for the organization.

In summarizing this experience, we learned that virtualization for cost savings appears to be applicable to almost all organizations and leads to larger-than-expected savings. Furthermore, when we benchmarked with other organizations, we learned that most organizations fail to drive out all of the potential cost savings simply because they do not plan sufficiently.


Virtualization for Increasing Agility

Virtualization is also a very powerful approach to increasing the IT agility of an organization. This example of a virtualization strategy comes from an organization that was frustrated by long "time-to-solution" cycles. The strategy for agility was based on the same fundamental technologies, but it required even more planning to optimize. In the first example, we used virtualization to put the largest possible workload on the least expensive resources. In this example, we use virtualization so that we can quickly deploy resources and change their allocation to workloads.

Before virtualization, each project required several weeks to acquire new hardware that was ordered to the specific needs of the new service, application, or project. The organization frequently encountered capacity problems, and significant downtime was required when capacity had to be added.

Improved architectural standards were required to ensure that new applications would be deployed quickly. We created two standard virtual-server configurations. This allowed us to cover the broadest range of commercial applications and support the management tools that we needed to achieve the agility and efficiency targets. Virtualization allowed us to create pools of CPU and memory capacity that could be allocated quickly to a new application or a temporary capacity issue. This allowed the organization to purchase generic capacity and allocate it to specific needs, as required.
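The pooled-capacity model can be illustrated with a minimal first-fit placement sketch. The host sizes and request shapes here are illustrative assumptions, not data from the project:

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_cpu: int   # virtual CPUs still unallocated
    free_mem: int   # GB of RAM still unallocated

def place(hosts, name, cpu, mem):
    """First-fit placement of a VM request onto the pool; returns the host used."""
    for host in hosts:
        if host.free_cpu >= cpu and host.free_mem >= mem:
            host.free_cpu -= cpu
            host.free_mem -= mem
            return host.name
    raise RuntimeError(f"pool exhausted: no host fits {name} ({cpu} vCPU, {mem} GB)")

# A pool of generic hosts, purchased ahead of any specific project.
pool = [Host("host-a", free_cpu=32, free_mem=256),
        Host("host-b", free_cpu=32, free_mem=256)]

# Allocate generic capacity to specific needs as they arrive.
print(place(pool, "web-tier", cpu=8, mem=32))     # host-a
print(place(pool, "reporting", cpu=16, mem=128))  # host-a
print(place(pool, "batch", cpu=16, mem=64))       # host-b (host-a has only 8 vCPU left)
```

First-fit is the simplest possible placement policy; real hypervisor managers weigh many more factors, but the principle of buying generic capacity and carving it up on demand is the same.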

Virtualization also supported "on-demand" capacity. We could move quickly to allocate resources to temporary demands. We needed to plan carefully for the current and expected network demands, as network changes in a virtualized environment are challenging. As of this writing, new products promise to bring network virtualization to the same level of maturity as server virtualization.

The virtualization tools allowed administrators to move running workloads easily between hardware platforms. This allowed administrators to move a workload, upgrade a platform, and move the workload back, with no user impact. The ability to add resources, allocate them, and maintain them without user downtime dramatically changed the scheduling of project activities. This capability allowed us to perform application- and server-setup tasks without having to schedule around downtime windows or service levels.

One of the first opportunities to increase agility was in the development and test environments. We standardized testing and promotion processes. Before virtualization, each application project tended to design its own approach for testing and promotion. The approach was "optimized" to the needs of the specific project—minimizing resource cost and critical path. This tended to introduce long setup times for the test environment and unique promotion processes. After virtualization, we had a standard approach to allocate and share environments. We also had standard processes to promote applications. The time that is required to set up and reset these environments was dramatically reduced.

Increasing the agility of CPU and memory deployment would have meant little without improving the agility of storage. It was critical to have as little local storage as possible. It was also critical to consolidate storage-area-network (SAN) islands. Local storage and separate SANs caused delays and complexity during deployment of new applications. Creation of large pools of storage that had the required performance and reliability characteristics was essential for agility. We created redundant high-performance paths between the servers and storage, so that we could quickly assign storage to server resources.

In summarizing this experience, the biggest benefit was a 96-percent reduction in the time that was required to deploy a new application. A dramatic example of this new agility occurred shortly after virtualization was complete. A project required a very significant increase in capacity. Sufficient generic capacity was ordered. While the equipment was being shipped, the organization's priorities and projects changed. When the new generic gear arrived, it was easily allocated to the new projects.



As with any architectural endeavor, full value is realized only when the business benefits are aligned and you start with good information about the current state.

Plan Your Benefits

We needed to be clear on the benefits that we were trying to achieve. For example, a project that is focused on cost reduction will be more aggressive in server consolidation than one that is focused on agility. In our experience, failure to clarify virtualization objectives leads to the project stopping before the full benefits are realized. Those projects tend to virtualize only the "easy" workloads and servers.

Understand the Current Environment

To plan for optimal virtualization, we needed solid information about the current (and planned) workloads. We also needed good information about the network, to avoid designing a solution that would create network congestion.

Create the New Server Standards

It was well worth the time and effort to establish the new server standards at the outset of the project, so as to support the required workloads and drive virtualization benefits.

Plan the New Service Levels

At the outset of both examples, we rationalized the service-level structure. By standardizing a small number of service-level offerings, we greatly simplified the analysis and created a simplified process of allocating resources for applications. The applications that had the most demanding service levels could be deployed in about the same time as any other application. The reduced cost of disaster recovery effectively reduced the service-level insurance premium—shifting the cost/benefit analysis—and made it practical for almost all applications.

Start by Standardizing Development & Test Environments

Development and test environments are readily virtualized, but getting maximum value usually depends on standardizing the processes.

Plan Data-Center Savings

Significant savings were realized from reviewing planned data-center investments. We also needed to consider the new data-center demands of the virtualized environment to optimize cost reductions. In both cases, there were significant cost-avoidance opportunities. We still had to avoid creating hot spots or congestion points.

Consider Security

Security must be carefully considered. Virtualization technologies have significantly improved with regard to security; however, we found that the extra time that we spent in ensuring a secure virtualization strategy was well-invested.

Plan for Operations

Virtualization will drive significant operational changes. New tools and processes are required for managing a virtualized environment. We planned for the new tools and processes that were required to manage the new environment and realize the planned benefits. We also think it is important to plan, so that operations will have the tools and processes that are required to measure the benefits of the new environment.



Most of the benefits of virtualization are achieved in improved operations. The corollary is that all of the targeted benefits can be offset by weak operations. Organizations that had weak processes quickly needed to drive improvements to manage the new environments. Organizations that had strong processes realized new benefits. This applied to organizations that pursued either strategy. Fortunately, many vendors now provide virtualization-aware tools.

Configuration and change management became more demanding. Before virtualization, the number of new servers and configuration changes was restricted by the pace of physical changes. The ease of creating new virtual servers could have led to virtual-server sprawl. We needed better change-management processes.
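A minimal sprawl check follows directly from this point: compare what the hypervisors actually report against what change management has on record. The inventories and VM names below are hypothetical:

```python
# Hypothetical inventories: VMs the hypervisors report as running vs. VMs
# recorded in the change-management database (CMDB).
discovered = {"web-01", "web-02", "rpt-01", "test-scratch-7", "dev-clone-3"}
cmdb       = {"web-01", "web-02", "rpt-01", "bkp-01"}

unregistered = discovered - cmdb   # running VMs nobody recorded: sprawl candidates
stale        = cmdb - discovered   # recorded VMs that no longer exist

print(sorted(unregistered))  # ['dev-clone-3', 'test-scratch-7']
print(sorted(stale))         # ['bkp-01']
```

Even this trivial reconciliation, run regularly, surfaces the unrecorded virtual servers that physical procurement delays used to make impossible.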

Incident management was more demanding in the virtualized environment. Isolation of a performance problem and root-cause analyses were more complex. Solid incident management also relied on the improved configuration information.

Capacity management changed in character after virtualization. The consolidation of storage simplified management: there were fewer potential points for capacity issues, but more devices and services depended on the consolidated storage, so capacity required more diligence and monitoring. Understanding the overall capacity demands on CPU, memory, storage, and network was more complex after virtualization.

It is our experience that one of the most challenging operational changes was network management. Aggressive consolidation dramatically changed network-traffic patterns and can create congestion that is more difficult to diagnose in a virtualized environment, so it was very important to monitor network activity after virtualization. Ongoing monitoring, tuning, and management are absolutely required to maintain the health of the solution. The emerging technologies around network virtualization promise improvements, but they will be even more demanding of monitoring and management.


What's Next?

We are very interested in the emerging technologies around virtualization of network adapters. This could significantly influence the server marketplace and virtualization implementations. In the preceding examples, the network is still one of the most challenging components to plan, implement, and manage in a virtualized environment.

We are also seeing more virtualization used to secure environments. Virtual-desktop technology readily lends itself to deploying secure standard environments. New, more efficient protocols could change potential deployment scenarios.

Cloud computing is essentially a virtualization technique that we expect to become a regular part of virtualization planning. We expect combinations of cloud computing and on-premises virtualization to become a common pattern.



Our experience, based on a number of organizations and virtualization scenarios, has led to the following conclusions:

  • Most organizations have unexploited virtualization opportunities. Virtualization is applicable to a number of business settings and can drive a wide range of benefits.
  • With more planning, most organizations would derive more benefits from virtualization. They do not perform the planning that is required to maximize the benefits that are realized.
  • Most organizations do not provide the tools and processes for maximum operational benefits. We would not consider a strategy to be successful unless it demonstrated value over time. We have found that ongoing investment in the strategy depends on being able to measure the benefits that are identified in the planning phase.
  • Finally, we believe that emerging technologies will support even larger benefits from virtualization. New applications and infrastructure products are increasingly virtualization-certified. Emerging network-virtualization products promise more efficiency and manageable server virtualization. Desktop virtualization promises significant efficiency improvements. Security concerns are being addressed on an ongoing basis. Our next project will combine new cloud-based virtualization and the most current on-premises virtualization technologies.

As an architectural technique, virtualization is already very powerful. All signs indicate that new approaches and technologies will provide even more powerful virtualization cases.


About the Author

Greg Hutch's experience with virtualization spans a number of organizations. Currently, he is Director, Enterprise Architecture, for Viterra (an international agri-business), where his responsibilities include enterprise architecture and strategic IT planning. Previously, Greg was Manager, Technical Development, at SaskPower and Director, Technology Solutions, at Information Services Corporation; he has also worked in a range of consulting roles. Greg can be reached at