The Power of Virtualization in Changing Data Center Infrastructure


Today's world is defined by an overwhelming variety of technologies designed to simplify our lives, so much so that they have become an integral part of our daily routines. From the invention of the printing press to the automobile, the introduction of the computer and the rise of the Internet, the landscape of technology is constantly evolving.

Given how ubiquitous these inventions have become, it's remarkable to think there was a time when they were not widely used.

Such is the case for data storage and its accompanying technologies. A few years ago, many organizations were wary of adopting new data storage technologies, which may come as a shock considering the overwhelming demand for data today.

The explosion of virtual machines (VMs) has propelled the growth of data storage across all industries. Virtualization utilizes a software program to replicate the functions of physical hardware, offering new levels of functionality, flexibility and cost savings.

The Surge of Virtualization

The rapid rise of virtualization can be attributed to its ability to let organizations run considerably more applications on the same hardware. Doing so demands substantial amounts of storage and renews the need to focus on refined storage management, efficiency and flexibility.

A key benefit of virtualization is the efficiency it enables in the data center's hardware. Traditionally, physical servers in a data center spend much of their time idle. By running virtual servers on that same hardware, organizations can consolidate workloads and make far better use of their CPU and other resources.
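To make the efficiency argument concrete, here is a minimal Python sketch of the consolidation idea: it packs a set of hypothetical VM workloads onto as few physical hosts as possible using a simple first-fit heuristic. The core counts and host capacity below are illustrative assumptions, not figures from any particular deployment.

    # Illustrative only: consolidate hypothetical VM workloads onto physical hosts,
    # each assumed to offer 16 usable CPU cores.
    HOST_CAPACITY_CORES = 16

    def consolidate(vm_core_demands):
        """Assign each VM to the first host with enough spare cores (first-fit decreasing)."""
        hosts = []  # cores already committed on each host
        for demand in sorted(vm_core_demands, reverse=True):
            for i, used in enumerate(hosts):
                if used + demand <= HOST_CAPACITY_CORES:
                    hosts[i] = used + demand
                    break
            else:
                hosts.append(demand)  # no existing host has room; bring up another
        return hosts

    # Nine lightly loaded workloads that would otherwise each idle on a dedicated server
    demands = [2, 4, 3, 1, 6, 2, 5, 3, 2]
    hosts = consolidate(demands)
    print(f"{len(demands)} VMs fit on {len(hosts)} hosts: {hosts}")

Run with these sample numbers, nine servers' worth of workloads fit on two physical hosts, which is exactly the kind of utilization gain virtualization makes possible.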

Another significant benefit of virtualization is the flexibility it provides. Infrastructure made up mostly of virtual machines, rather than physical ones, is far easier to adapt. For example, if an organization wants to replace hardware, the data center administrator can simply migrate the virtual server to the newer hardware, gaining performance at a fraction of the cost. Before virtual servers, administrators had to install the new server, then reinstall the software and migrate all the data stored on the old one. It is significantly easier to migrate a VM than a physical server.
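As a rough illustration of how much simpler the operation has become, the following Python sketch uses the libvirt bindings to live-migrate a running VM to replacement hardware. The host URIs, the VM name and the choice of flags are placeholders for this example; a real migration would depend on the hypervisor and network setup.

    # Sketch using the libvirt Python bindings; "old-host", "new-host" and "web01"
    # are placeholder names for this illustration.
    import libvirt

    src = libvirt.open("qemu+ssh://old-host/system")   # connection to the aging server
    dst = libvirt.open("qemu+ssh://new-host/system")   # connection to the new hardware

    dom = src.lookupByName("web01")                    # the running VM to move

    # Live-migrate the guest while it keeps running and make it persistent on the
    # destination; no manual reinstall or hand-copying of data is involved.
    flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PERSIST_DEST
    dom.migrate(dst, flags, None, None, 0)

    src.close()
    dst.close()

The equivalent move for a physical server would mean racking new hardware, reinstalling the operating system and applications, and copying the data by hand.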

Virtualization Makes Data Center Management More Efficient

Data centers with a significant number of servers – around 20 or more – are starting to seriously consider turning those servers into VMs to achieve the cost savings and flexibility described above. In addition, virtualizing servers makes them much easier to manage. Administering a large number of physical machines can become cumbersome for data center staff. Virtualization makes data center management substantially more efficient by allowing administrators to run the same number of servers on fewer physical machines.

Virtualization Places New Demands on Infrastructure

Despite the clear benefits of virtualization, the popularity of virtual servers is placing strain on traditional data center infrastructure and storage devices.

In a way, this problem is directly correlated to the popularity of VMs. The initial model of virtual machines used the local storage inside the physical server, which made it impossible for administrators to migrate a virtual machine from one physical server to another. The solution was to give the VM hosts shared storage. The success of these solutions paved the way for increased use of virtual machines, which has evolved into today's server virtualization landscape, where all physical servers and VMs draw on a unified pool of shared storage.

The drawback to this approach? Data congestion. During periods of high demand, with only a solitary entry point, data flow can become congested very quickly. Since the popularity of VMs and the sheer volume of data are only projected to grow, it is evident that this approach to storage infrastructure must be improved. Data center infrastructure must be able to keep pace with data growth.

The Need for Solutions to Reduce the Impact of Virtualization on Data Centers

Early adopters of virtualized servers have already confronted this issue and have taken the initiative to develop solutions to reduce its impact. As other organizations integrate virtualization into their data centers, they will encounter the same problems.

The solution: eliminating the single point of entry. In doing so, organizations can maximize the benefits of virtualization while avoiding the data congestion of traditional scale-out environments and ensuring that their storage architecture keeps pace with their rate of VM usage. Most of today's storage solutions have a solitary gateway that regulates the flow of data, creating a bottleneck when demand peaks. Organizations should instead use solutions with several data entry points that distribute the workload evenly. This allows systems to maintain optimal performance and reduce lag, even when accessed by many users at once.
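One way to picture the multi-entry-point approach is client-side distribution of requests across several storage gateways instead of a single one. The Python sketch below is a minimal illustration under assumed gateway addresses and a simple key-hashing scheme; it is not tied to any particular product.

    # Minimal illustration: spread storage requests across several entry points by
    # hashing each object key to a gateway, rather than funnelling everything
    # through one address. The gateway addresses are made up for this example.
    import hashlib

    GATEWAYS = [
        "10.0.0.11:9000",
        "10.0.0.12:9000",
        "10.0.0.13:9000",
        "10.0.0.14:9000",
    ]

    def pick_gateway(object_key: str) -> str:
        """Deterministically map an object key to one of the entry points."""
        digest = hashlib.sha256(object_key.encode("utf-8")).digest()
        index = int.from_bytes(digest[:8], "big") % len(GATEWAYS)
        return GATEWAYS[index]

    # Every client computes the same mapping, so the load spreads with no central
    # broker to become a bottleneck.
    for key in ("vm-042/disk0", "vm-042/disk1", "backup-2015-06-01.tar"):
        print(key, "->", pick_gateway(key))

Because the mapping is deterministic, any client can reach the right entry point directly, and peak demand is shared across all gateways rather than queuing behind one.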

Although this is the most straightforward solution, the next generation of storage architecture takes an alternative approach.

The Next Generation of Storage Architecture 

In an effort to meet the storage challenge of scale-out virtual environments, the practice of running VMs inside the storage nodes themselves – thereby turning those nodes into compute nodes – is quickly becoming the next generation of storage architecture.

To solve the data congestion this approach produces, many organizations are moving away from the traditional dual-layer architecture, in which storage and the virtual machines run in separate layers, and flattening the entire infrastructure so that both run in a single layer.

A breakthrough in innovation often coincides with a breakthrough in acceptance. Unlike many other groundbreaking inventions, infrastructure virtualization has skyrocketed in popularity and is not slowing down anytime soon. More and more companies will adopt virtualization and will run into the same problems the early adopters faced. By learning from those who came before and following the guidelines outlined above, organizations will be able to build a successful scale-out virtual environment that optimizes both performance and expenditure.

About the Author:

Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise-scale data storage solutions built to be cost-effective for storing huge data sets. From 2004 to 2010, Stefan worked in this field for Storegate, the wide-reaching Internet-based storage solution for consumer and business markets, which carried the highest possible availability and scalability requirements. Before that, Stefan worked on system and software architecture for several projects at Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile and fixed network operators.


