How to Regain Control Over Self-Service Provisioning

Over the last few years, organizations have shifted from using virtual data centers to creating private or hybrid IaaS clouds that allow authorized users to perform self-service provisioning of virtual machines. These environments have reduced administrative workloads, improved the user experience, and discouraged shadow IT, but they have also brought their own challenges. As virtualized environments increase in scale, management techniques have often become far less effective, making it difficult to keep track of virtual machines, their owners, and why the virtual machines were created in the first place.

To administer these environments effectively, enterprise IT should think beyond hardware resource consumption and consider the broader principles that affect all areas of enterprise IT, including administrative overhead and opportunity cost. Regaining control over self-service provisioning requires focusing on four main areas: understanding the economies of scale, implementing user controls, addressing scalability requirements, and handling virtual machine lifecycle management.

Economies of Self-Service Provisioning

Because every workload carries a financial cost, virtualization has gained rapid popularity: by running multiple workloads on shared hardware while maintaining operating system-level isolation boundaries between them, it dramatically reduces the cost of each individual workload. Self-service provisioning looks to improve these economies further by “expensing” the workloads in an equitable way. When an organization transitions to a self-service, private cloud environment, the enterprise IT team often takes on the role of a service provider, making consumable services available to internal customers, or employees, for a price.

Enterprise IT teams do this either through chargebacks, a way of billing the cost of services to the individual departments that consume those services, or showbacks, an approach that makes individual departments aware of the cost of the resources they are consuming without actually billing them. It’s important to remember that enterprise IT is often under pressure to keep service costs down, since it is essentially competing with the public cloud. If the showback model is chosen, there must be assurance that the available hardware and software resources are used as efficiently as possible.
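The difference between the two billing models can be sketched in a few lines of Python. The unit rates and usage dimensions below are illustrative assumptions; real metering tools track many more dimensions (storage, network, licenses) and load rates from policy.

```python
from dataclasses import dataclass

# Hypothetical unit rates -- real environments meter many more dimensions.
RATE_PER_VCPU_HOUR = 0.03
RATE_PER_GB_RAM_HOUR = 0.005

@dataclass
class Usage:
    department: str
    vcpu_hours: float
    ram_gb_hours: float

def monthly_cost(usage: Usage) -> float:
    """Cost of the resources a department consumed this billing period."""
    return (usage.vcpu_hours * RATE_PER_VCPU_HOUR
            + usage.ram_gb_hours * RATE_PER_GB_RAM_HOUR)

def report(usages: list[Usage], model: str) -> list[str]:
    """Chargeback bills the department; showback only surfaces the figure."""
    lines = []
    for u in usages:
        cost = monthly_cost(u)
        if model == "chargeback":
            lines.append(f"{u.department}: billed ${cost:.2f}")
        else:  # showback
            lines.append(f"{u.department}: consumed ${cost:.2f} (not billed)")
    return lines
```

The only difference between the models is what happens to the computed figure, which is why the same metering pipeline can usually support either.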

The key to making a private cloud environment financially viable is to manage resources closely in a way that minimizes both overhead and waste. As a result, regaining control over self-service provisioning means focusing on managing resource consumption while also maintaining a high level of situational awareness about how resources are used.

Importance of User Controls

Similar to other virtualized environments, private clouds have only a finite quantity of hardware resources that must be shared. Since running various workloads carries a tangible cost, it is not in an organization’s best interest to allow just anyone to consume system resources on an as-needed basis. Permissions should be used to broadly manage the various aspects of the private cloud environment.

While a good permissions model is important, it is also imperative that the private cloud software can conform to an organization’s own rules of operation. For example, if the organization is using the chargeback model, each virtual machine created will impact a department’s budget and can increase the overall cost. In this case, it is more effective to give users the ability to request a virtual machine pending approval from a manager, rather than giving users blanket permission to create virtual machines at will. Enterprise IT should keep in mind, however, that not every private cloud management tool supports the use of permissions, and those that do might not always be able to fully match the organization’s requirements.
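The request-pending-approval pattern described above can be sketched as a simple state machine. The class names and statuses here are hypothetical, not taken from any particular private cloud product:

```python
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    DENIED = "denied"

class VMRequest:
    """A user's request for a VM; it starts pending and needs manager review."""
    def __init__(self, requester: str, vcpus: int, ram_gb: int):
        self.requester = requester
        self.vcpus = vcpus
        self.ram_gb = ram_gb
        self.status = Status.PENDING

    def review(self, manager_approves: bool) -> None:
        self.status = Status.APPROVED if manager_approves else Status.DENIED

def provision(request: VMRequest) -> str:
    """Only approved requests result in an actual VM being created."""
    if request.status is not Status.APPROVED:
        raise PermissionError("request has not been approved by a manager")
    return f"vm-for-{request.requester}"
```

The key point is that provisioning is gated on the approval state rather than on the user holding a blanket create permission.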

Another important tool for keeping users in check is a quota system, which self-service provisioning environments can use in various ways. While not every self-service provisioning environment supports quotas, those that do generally use them for two main purposes: to cap a user’s cumulative resource consumption and to cap a department’s. This allows enterprise IT to set a collective resource consumption limit across all of the virtual machines belonging to a user or a department.

Quotas are often implemented at the department or tenant level as a budget control mechanism, but they can also be put in place by the IT department to prevent other departments from depleting a private cloud of its available resources. The enterprise IT department should ensure the selected vendor supports both resource-based quotas and cost-based quotas, so that the organization can remain flexible as its needs change.
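A minimal sketch of the two quota types, checked together at provisioning time. The cap values and function signature are assumptions for illustration; a real tool would load quota policy per user and per tenant:

```python
class QuotaError(Exception):
    """Raised when a provisioning request would exceed a quota."""

# Hypothetical caps -- a real system would load these from policy.
USER_VCPU_CAP = 16      # resource-based quota, per user
DEPT_COST_CAP = 500.0   # cost-based quota, per department per month

def check_quotas(user_vcpus_in_use: int, requested_vcpus: int,
                 dept_cost_to_date: float, request_cost: float) -> None:
    """Reject a request that would exceed either the resource-based
    or the cost-based quota; otherwise return silently."""
    if user_vcpus_in_use + requested_vcpus > USER_VCPU_CAP:
        raise QuotaError("user vCPU quota exceeded")
    if dept_cost_to_date + request_cost > DEPT_COST_CAP:
        raise QuotaError("department cost quota exceeded")
```

Supporting both checks in one gate is what gives the organization the flexibility the paragraph above describes: either limit can be tightened or relaxed independently.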

Addressing Scalability Requirements

Although it may be tempting to think of regaining control over self-service provisioning purely in terms of user controls, scalability and virtual machine lifecycle management are equally important. Consider quota systems again: while quotas are an excellent tool for preventing excess resource consumption, the IT department cannot restrict users or departments to the point that they lack the resources needed to do their jobs. Otherwise, there is a chance that users will begin consuming public cloud resources without the IT department’s knowledge or consent, a practice often referred to as shadow IT. Shadow IT circumvents enterprise IT and creates larger problems; for example, the IT staff can no longer back up those resources. The uncontrolled use of resources can also violate regulatory requirements and subject the organization to hefty fines and penalties.

A best practice to prevent the use of shadow IT is to be accommodating of users’ requests for additional resources. This is often easier said than done, as organizations face various internal requests. For example, a situation may arise where a department needs more resources than the existing infrastructure can deliver. In this case, the private cloud software needs to be sufficiently flexible to provide a solution that doesn’t require major capital expenditure, but still allocates the required resources.

When an organization needs to add capacity, it is important for enterprise IT to consider the nature of the request, so a solution that is well-suited for the situation can be implemented. If a department has recently hired a large number of new employees, then adding server hardware as a way of accommodating those employees’ needs might be warranted. If, however, a department needs additional resources for a project that will only last a couple of months, then it may not make sense to invest in new hardware. Instead, it might be better to temporarily leverage the public cloud.
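The buy-versus-burst decision above comes down to simple arithmetic: hardware is a fixed capital outlay regardless of project length, while public cloud spend scales with duration. A sketch with illustrative numbers (all figures are assumptions, not vendor pricing):

```python
def cheaper_option(project_months: int,
                   hardware_cost: float,
                   cloud_cost_per_month: float) -> str:
    """Compare buying hardware outright vs. renting public-cloud
    capacity for a project of known duration.

    Hardware is a capital purchase whose cost does not shrink for a
    shorter project, while cloud spend grows linearly with duration.
    """
    cloud_total = project_months * cloud_cost_per_month
    return "public cloud" if cloud_total < hardware_cost else "buy hardware"
```

A two-month project at $3,000/month of cloud spend ($6,000 total) beats a $20,000 hardware purchase; a two-year commitment at the same rate does not. Real decisions also weigh hardware lifetime, power, and staffing, which this sketch deliberately omits.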

Keeping Up With Virtual Machine Lifecycle Management

Private cloud economics center on the efficient use and allocation of hardware resources, so it’s easy to see how unmanaged virtual machine sprawl can negatively impact the bottom line. Wasting resources creates an associated opportunity cost, as those resources are no longer available for use by other virtual machines. As a result, when an organization is regaining control over a self-service provisioning environment, it is critical to be able to eliminate waste and unnecessary sprawl to help the environment run smoothly.

Although the idea of reducing waste and virtual machine sprawl may appear deceptively simple, the process can actually be difficult. When looking to reduce waste and sprawl, there are two main things an organization should consider. First, enterprise IT must take measures to stop virtual machine sprawl from continuing in the future. Second, the IT department must go back and address any previously existing sprawl or waste, which is the more challenging of the two tasks.

Internal staff members are less likely to waste resources when they are being billed for those resources, so the implementation of a chargeback system can play a major role in preventing future virtual machine sprawl. However, a chargeback system, while helpful, will not solve the problem on its own. Enterprise IT must have an automated lifecycle management system periodically check whether virtual machines are still being used. Although the checks themselves are automated, IT administrators need insight into the process; for this reason, private cloud environments that support automated lifecycle management for virtual machines typically provide a reclamation report that allows administrators to see which resources have been reclaimed and which virtual machines have been removed.
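The periodic check described above can be sketched as a function that flags idle virtual machines for administrator review. The 90-day idle threshold and the last-activity input are assumptions; real lifecycle tools combine several signals (power state, login history, I/O activity):

```python
from datetime import date, timedelta

# Hypothetical policy: flag VMs idle for more than 90 days.
IDLE_THRESHOLD = timedelta(days=90)

def reclamation_report(last_activity: dict[str, date],
                       today: date) -> list[str]:
    """List VM names whose last recorded activity is older than the
    idle threshold, so an administrator can review them before any
    resources are actually reclaimed."""
    return sorted(name for name, last_used in last_activity.items()
                  if today - last_used > IDLE_THRESHOLD)
```

Keeping a human in the loop, by producing a report rather than deleting automatically, is what preserves the administrator insight the paragraph calls for.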

With that report, IT administrators can locate virtual hard disks that are not associated with a virtual machine and run rightsizing reports, which analyze performance monitoring data for individual virtual machines and then recommend changes to resource allocations. Reclaiming these resources may make it possible for the organization to achieve a higher overall virtual machine density, in turn reducing costs.
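A rightsizing recommendation reduces to comparing observed peak utilization against the current allocation, plus some headroom. The 30% headroom figure below is an assumed policy, not a standard:

```python
def rightsize(allocated_vcpus: int, peak_cpu_utilization: float) -> int:
    """Recommend a vCPU allocation from observed peak utilization
    (0.0-1.0), keeping ~30% headroom above the observed peak.

    Never recommends more than the current allocation (growth is a
    separate capacity decision) and never less than one vCPU.
    """
    needed = allocated_vcpus * peak_cpu_utilization * 1.3
    return max(1, min(allocated_vcpus, round(needed)))
```

For example, a virtual machine with 8 vCPUs that never exceeds 25% CPU utilization would be recommended down to 3 vCPUs, freeing the remainder for other workloads and raising overall density.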

Reducing the Cost of Self-Service

Although it may be tempting to think of regaining control over self-service provisioning in terms of policies and access control mechanisms, it is important to place an emphasis on economies to achieve success. By creating higher virtual machine density and focusing on user controls, scalability, and lifecycle management, the organization will reduce the cost of self-service provisioned workloads.
