Data Center Cost Management: Why Virtualization Requires a New Approach

Managing and measuring costs has taken on a new urgency with the emergence of virtualization and new computing models. With virtualization, customers get a shared infrastructure that shifts the cost from a clear 1:1 relationship between servers, applications and users to a more dynamic model. We're just beginning to realize the tremendous impact this has on cost management and measurement in the data center. To make effective decisions about how to deploy resources, the business needs to clearly understand the associated costs.

Traditionally, cost management and measurement for distributed servers has been a relatively straightforward task. Each application is deployed on specific hardware, and the application lifecycle often maps to the hardware lifecycle. The costs of delivering a specific business service can be attributed directly to those servers and the associated components.

Shared components like networking and storage are either absorbed as a shared cost or usage is measured and reported for chargeback purposes. Shared services like email may be absorbed as a centralized cost. In this model, IT is often able to offload the costs of procuring hardware and software for specific applications to the business unit or department that required the service.

In a shared resource environment, however, IT needs to take on a new role in measuring and managing costs. To plan budgets effectively and gain transparency into how IT resource utilization relates to costs, IT needs the right tools and processes, designed with virtualization in mind.

Virtualization's Impact on Cost Management

Virtualization breaks the old models and can simplify cost measurement dramatically - if we know how to take advantage of virtualization's unique properties and manage costs in a shared environment.

Virtualization lets us consolidate applications onto fewer servers so that multiple applications share resources on the same server. However, these applications - encapsulated in virtual machines (VMs) - often have different resource requirements. One VM may need 4GB of memory, 500MHz of CPU and 50GB of storage, while another VM on the same server may need 8GB of memory, multiple CPUs and 150GB of storage. What's more, VMs can move between servers seamlessly and may grow or shrink in size. Measuring costs in this context means we must be able to break down the component costs for memory, CPU, storage and perhaps even networking, and allocate them with reasonable accuracy to each VM.

Even in the standard one-application-to-a-server model of the 1990s, many organizations struggled with measuring costs in the data center and, as a result, shied away from implementing any form of chargeback or cost allocation to the business. It's easier to create a single budget line for IT and roll up all the costs to it, but this limits accountability for resource consumption and often leads to server sprawl in the data center.

In a virtualized data center, a lack of accurate cost measurement can have perilous consequences. The resources - servers, networking and storage - are often already there, so a department requesting a few virtual machines for a project or application doesn't need to submit a purchase order. This lack of accountability for using potentially expensive IT resources can lead to VM sprawl and drive higher costs for storage and servers in particular.

To address this, IT organizations need to at minimum be able to measure and report on the costs associated with VMs. Cost measurement must account for: hardware costs and depreciation; redundant architectures for high availability; services like backup, high availability, storage replication or disaster recovery; and fixed costs like software, floor space or support overhead. These costs need to be effectively allocated based on either the sizing of a VM or the actual consumption of resources, so we need to find some way of assigning a cost per computing resource unit - per GHZ of CPU, per GB of memory and storage, and potentially per GB of network and storage I/O.
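As a minimal sketch of what this allocation looks like in practice, the snippet below prices a VM from per-unit rates. The rates and VM sizings are purely illustrative assumptions, not real pricing data:

```python
# Hypothetical cost per resource unit per month (illustrative only)
UNIT_RATES = {
    "cpu_ghz": 8.00,     # per GHz of CPU allocated
    "memory_gb": 5.00,   # per GB of memory
    "storage_gb": 0.25,  # per GB of storage
}

def monthly_vm_cost(vm):
    """Allocate cost to a VM based on its sized (allocated) resources."""
    return sum(UNIT_RATES[resource] * amount
               for resource, amount in vm.items())

# The two VMs described earlier: 4GB/500MHz/50GB vs. 8GB/multi-CPU/150GB
vm_a = {"cpu_ghz": 0.5, "memory_gb": 4, "storage_gb": 50}
vm_b = {"cpu_ghz": 4.0, "memory_gb": 8, "storage_gb": 150}

print(monthly_vm_cost(vm_a))  # 36.5
print(monthly_vm_cost(vm_b))  # 109.5
```

The same rate table works whether you allocate by VM sizing (as here) or by measured consumption - you simply swap the sized amounts for metered averages.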

Getting From Cost Chaos to an Effective "Showback" Model

Identifying these costs can seem daunting. We certainly want to reach a point where we can measure and allocate costs on a per-VM basis, but there's no obvious way to come up with the right information.

The question then is where to start? There are two things to think about before you start digging out purchase orders for the servers:

1. Map out how you want to measure costs. What components and services will you measure costs for, and what will you assume is a sunk cost? If you're just getting started, keep it simple - focus on an allocation model with a set of fixed costs. This will make it easier to come up with an initial set of cost data. Worry about services like HA or backup later. Some costs, like networking, floor space and power/cooling may already be captured by other teams.

2. Decide how you will structure cost measurement. Decide how to group your VMs - whether by department, application, function (development vs. production) or some other method.

The next step is determining resource and per VM costs. The first place to look is your existing purchasing data on your servers and storage that are used in your virtual environment. Information such as the ROI/TCO analysis from your virtualization project may also prove useful. In approaching this, customers have had the most success when they:

  • Identify costs within each cluster of servers, which typically have similar hardware configurations.
  • Account for depreciation. Your finance department will often depreciate the cost of servers over several years, so it makes sense to spread costs for the hardware over the same period of time to get "per unit/hour" costs.
  • Allocate a certain percentage of costs to memory, and a certain amount to CPU - usually, CPU and the system architecture make up the majority of the cost of a server. This allocation doesn't need to be exact, but will give you a "CPU GHz/hour" and "GB of memory/hour" number to use for cost measurement.
  • Work with the storage team to identify storage costs per GB used for different types. You'll want to amortize this cost so that it can be expressed in one hour increments.
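The steps above can be sketched roughly as follows. All figures - the server price, depreciation period and 60/40 CPU/memory split - are illustrative assumptions you'd replace with your own purchasing and finance data:

```python
HOURS_PER_YEAR = 24 * 365

def hourly_unit_rates(server_cost, depreciation_years,
                      total_ghz, total_memory_gb, cpu_share=0.6):
    """Spread hardware cost over the depreciation period, then split it
    between CPU and memory (cpu_share is an assumed allocation)."""
    hourly_cost = server_cost / (depreciation_years * HOURS_PER_YEAR)
    return {
        "ghz_hour": hourly_cost * cpu_share / total_ghz,
        "memory_gb_hour": hourly_cost * (1 - cpu_share) / total_memory_gb,
    }

# e.g. a $12,000 server depreciated over 3 years,
# with 16 cores x 2.5GHz (40GHz total) and 128GB of RAM
rates = hourly_unit_rates(12_000, 3, total_ghz=40, total_memory_gb=128)
```

The resulting per GHz/hour and per GB/hour rates are what you'd multiply against each VM's sizing or measured usage.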

Managing the Cost Data: Moving Beyond Excel

Once you've done this, you can begin measuring allocation - or actual utilization - and begin reporting on costs per VM. You'll need a tool to do this - likely something more than Excel and some scripts, even though IT professionals who focus on cost measurement today often rely on a homegrown Excel spreadsheet. A "showback" or chargeback tool should be designed with virtualization in mind and let you account for HA configurations, snapshot utilization, live migration and other unique characteristics of virtualization. Look for tools that:

  • Let you create and manage multiple cost models that can account for actual usage, allocated resources, or customized fixed costs.
  • Track resource utilization of all components by virtual machine, including CPU, memory, storage, storage I/O and network I/O.
  • Can easily map to your organization's structure and apply costs at the department or group level.
  • Provide a robust reporting capability that lets you measure, analyze and share cost information with the rest of the organization.
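The department-level rollup in the list above is simple to sketch: once each VM carries a cost and a department tag, a showback report is just a grouped sum. The VM names, departments and costs below are made-up examples:

```python
from collections import defaultdict

# Hypothetical per-VM cost data tagged by department
vms = [
    {"name": "web-01",   "department": "Marketing",   "monthly_cost": 36.50},
    {"name": "db-01",    "department": "Marketing",   "monthly_cost": 109.50},
    {"name": "build-01", "department": "Engineering", "monthly_cost": 72.00},
]

def showback_by_department(vms):
    """Roll up per-VM costs into a department-level showback report."""
    totals = defaultdict(float)
    for vm in vms:
        totals[vm["department"]] += vm["monthly_cost"]
    return dict(totals)

report = showback_by_department(vms)
print(report)  # {'Marketing': 146.0, 'Engineering': 72.0}
```

A real tool adds the hard parts - metering actual utilization, handling HA overhead and shared services - but the reporting model reduces to this kind of grouping.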

This will help with budget cycles and identify more costly virtual machines. It can also give you the ammunition you need to help convince users to scale back VMs that may be over-provisioned. VMware and several others offer tools to help you with cost reporting and chargeback.

Better Resource Utilization with Effective Chargeback

By putting together some basic cost metrics, you can measure costs in a virtual environment and begin using the data to drive cost visibility and accountability in the data center. This "showback" model sets the stage for an effective chargeback model in the future.

The data and analysis you gather on costs will help you down the road. You'll be able to manage expectations for ever-higher service levels from IT by showing the business the potential cost impact of different options. When you go into budget meetings, you'll be able to quickly show where resources are over-utilized and how much that costs, and you'll be able to make the case for resources - servers, storage, etc. - when you need them.

About the author:

David Friedlander is senior product marketing manager of VMware, which delivers solutions for business infrastructure virtualization that enable IT organizations to energize businesses of all sizes. With 2009 revenues of $2 billion, more than 170,000 customers and 25,000 partners, VMware is a leader in virtualization which consistently ranks as a top priority among CIOs. VMware is headquartered in Silicon Valley with offices throughout the world and can be found online at