Why Software-Defined Should be Driving Your Cloud Strategy

Software-defined technologies and software-defined data centers (SDDCs) are generating significant traction in the IT community. The concept of “software-defined” is becoming increasingly pervasive as organizations across the globe look to modify their compute, storage and networking infrastructures.

The Truth About Software-Defined

The majority of organizations see the potential operational benefits of software-defining their IT infrastructure. What most do not realize, however, is that an SDDC is not achieved by simply bolting together virtualization, software-defined networking (SDN), and software-defined storage (SDS). These components are important, but the true SDDC is not something that can be bought off the shelf. It is an operational state, attainable only by adopting a new way of managing and controlling the moving parts within the overall infrastructure, whether they are software-defined or not.

Implementing technologies such as software-defined networking enables entirely new operational models, but looking at them in isolation, rather than as part of the greater role they have to play in private cloud initiatives, severely limits their value. A recent report by Forrester analyst Lauren Nelson highlighted the fact that the majority of private cloud initiatives fall short of three core characteristics: self-service access; tracking and monitoring of resources; and full automation.

The challenge is that the self-service and automation goals of a cloud team focus almost exclusively on the provisioning process, while tracking and monitoring is largely seen as a path to implementing chargeback. All of these areas can benefit from software-defined infrastructure.

Understanding Supply and Demand

Most organizations pursuing private clouds have managed to establish a cloud catalog and a self-service portal. Unfortunately, these two elements add little business value unless they are connected and leveraged in a larger process. Automating the provisioning of individual self-service requests may be beneficial to certain types of users, such as developers, but it is just one link in a bigger chain for most use cases. For enterprise workloads, the release management process involves many steps, and can start well in advance of the go-live date. In this case, the ability to use a web portal to turn on a VM of a certain size is not overly helpful. What is needed is the ability to look at the aggregate demand of that application, in combination with all other current and future demands, and ensure the needed resources will be available when they are required.
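As a rough illustration of what "looking at aggregate demand" means in practice (the class and function names here are hypothetical, not any product's API), the idea can be sketched as summing confirmed and forecast workload needs over a planning horizon and flagging the periods where supply falls short:

```python
from dataclasses import dataclass

@dataclass
class Demand:
    """A workload's resource needs over a planned time window."""
    name: str
    start_week: int   # week the workload goes live
    end_week: int     # last week it needs resources
    cpu: int          # vCPUs required
    mem_gb: int       # memory (GB) required

def weekly_totals(demands, horizon_weeks):
    """Aggregate all demand active in each week of the planning horizon."""
    totals = []
    for week in range(horizon_weeks):
        active = [d for d in demands if d.start_week <= week <= d.end_week]
        totals.append({
            "week": week,
            "cpu": sum(d.cpu for d in active),
            "mem_gb": sum(d.mem_gb for d in active),
        })
    return totals

def shortfalls(totals, cpu_capacity, mem_capacity_gb):
    """Return the weeks in which aggregate demand exceeds available supply."""
    return [t for t in totals
            if t["cpu"] > cpu_capacity or t["mem_gb"] > mem_capacity_gb]
```

Even this toy model shows why a per-request provisioning portal is insufficient: the shortfall in week 2 is only visible when all overlapping demands are considered together.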

Consider a large conference facility – the kind frequently used in this industry for tradeshows, with moveable walls and near-infinitely configurable layouts. If you don’t know what kinds of events will take place in a given week (an industry trade show, a wedding, etc.), or how many people will attend each one, you can’t possibly know how big to make individual rooms, what kinds of equipment will be required, or what services will be needed (electrical, AV, catering, etc.). The demand is unknown, making it very difficult to adequately prepare for and optimize your environment, and impossible to assure customers that their needs will be met.

Organizations need a complete view of upcoming demand, both confirmed and likely, so they can determine the most efficient way to provide infrastructures that satisfy it. Without understanding the demand, organizations may end up with very powerful, expensive infrastructures that are not fully leveraged because they cannot be aligned with the demands of the applications. As a result, the infrastructure will be over-supplied, underutilized, or not properly configured.

This intelligent alignment of supply and demand is where the software-defined data center becomes greater than the sum of its software-defined parts. With a deeper understanding of demand, organizations can begin to use software-defined controls to configure, specify and match infrastructure supply to meet extensive requirements. This provides organizations with the flexibility and agility they are looking for, but requires a level of control greater than has been necessary in the past.

With Great Flexibility Comes Great Responsibility

The more flexible something becomes, the more difficult it often is to use. In many ways, making the infrastructure programmable, or “software-defined,” greatly increases the chances that something will go dramatically wrong. It’s like moving from driving a very simple car to sitting in the cockpit of a fighter jet and trying to figure out how to fly it. The latest generation of fighter jets and bombers are actually aerodynamically unstable: very, very agile, but they won’t fly unless a control system is actively in charge of them. The computer is constantly making adjustments, and that is the price you pay for the new level of agility.

Software-defined technologies, too, can create an unmanageable environment if they are not properly connected and centrally controlled. As Gartner notes, there is a requirement for a control plane.

Policy Key to Automated Control

Operating environments through intelligent policies is key to controlling complexity. Using software to codify policies that describe how an environment should operate enables an organization to harness flexibility without the risk. But to do that, organizations must also be able to model and profile current and future demand, so that the capabilities of infrastructure supply can be tuned to meet its requirements. By understanding the purpose of a workload, its operational patterns, its affinities and anti-affinities, and its technical and resource requirements across compute, storage, network and software resources, organizations can accurately:

  • Know exactly what the infrastructure should look like (i.e., be fit for purpose)
  • Know exactly how much infrastructure is required (now and into the future)
  • Know exactly where workloads should go, and how resources should be allocated to them

IT teams operating and building private clouds need to look for a control plane that can answer such questions using intelligent analytics, not spreadsheets and best guesses. Without the precision and speed that come from this approach, it is not possible to achieve automation and control the balance of supply and demand within a private cloud.

Cloud Management Challenges with Supply and Demand

Many people assume that cloud management platforms will provide these capabilities, but they are expecting too much. These platforms generally lack the policy-based analytics, demand management and supply optimization capabilities needed to enable software-defined control of environments. They instead focus on automating the provisioning process, and while many do this quite well, it is a very narrow view of supply and demand. They will not optimize the density of the infrastructure, and will not automatically align the requirements of the applications with the capabilities of the infrastructure. They also will not attempt to predict future requirements, making procurement even less of a science than it is today.

Silos in Communication

There are also challenges from an organizational perspective. Political tensions often arise when existing operational silos are threatened. For example, one group may be implementing software-defined networking while another is implementing software-defined storage, and neither wants to cede control to a new group that is trying to unify these concepts. To combat this, some companies are forming covert or greenfield cloud projects run by special teams to side-step the political tensions and have the freedom to innovate. Regardless of approach, IT teams need to analyze, as a singular group, all the layers involved to come up with a unified strategy that links all elements of infrastructure supply with demand.

Still Early, but Approaches are Maturing

Organizations are early in the journey to the software-defined data center, but the leaders in the industry are already maturing their approaches. From a technology perspective, intelligently linking the cloud provisioning process with software-defined infrastructure is key, and policy-based control over supply and demand is the path to get there. From an organizational perspective, the various entrenched technology owners are starting to see the need to come together and collaborate to create mutually beneficial infrastructures, based on a common desire to elevate their organizations above their own self-interest.

And organizations are finding that they should not focus on supply-side software technology without considering the application demands that drive it. Nor should they fixate on self-service without thinking about how it fits into a bigger demand pipeline, and how that pipeline will ultimately impact their compute, storage, network and software resources. By linking these two concepts, organizations put themselves in a strong position to lower their operational risk, make more efficient use of hardware capacity, and be far more responsive to business needs.