DBA Corner: The Challenge of Managing the Modern IT Infrastructure


It can be challenging for IT architects and executives to keep up with the modern IT infrastructure. Homogeneous systems, common in the early days of computing, are almost non-existent in today's age of heterogeneous systems. It is de rigueur for Linux, UNIX, and Windows servers to be deployed throughout a modern IT infrastructure. And for larger shops, add in mainframes, too.

When multiple types of servers are deployed, it impacts everything else. Different operating systems, software, and services are required for each type of server. Data frequently must be shared between the disparate applications running on different servers, which requires additional software, networking, and services to be deployed.

And technology is always changing, hopefully advancing, and definitely different than it was even a year ago. For example, most organizations use multiple database systems from multiple providers. Just a decade ago it was a safe bet that most of them were SQL/relational, but with big data and mobile requirements many NoSQL database systems are being deployed. And every NoSQL DBMS is different from every other NoSQL DBMS. And let's not forget Hadoop, which is not a DBMS but can be used as a data persistence layer for unstructured data of all types and is frequently used to deploy data lakes.

Additionally, consider the impact of cloud computing: storing and accessing data and programs over the internet instead of on your own servers. As organizations adopt cloud strategies, components of their IT infrastructure move. What used to reside entirely in-house now requires a combination of on-premises and external computing resources. This can pose a management challenge.

Application delivery has changed significantly as well. Agile development methodologies combined with continuous delivery and DevOps enable programmers to produce software in short cycles, with quicker turnaround times. With microservices and APIs, software components developed by independent teams can be combined to interact and deliver business services more reliably and quickly than with traditional methodologies. This means that not just the procured components of your IT infrastructure are changing; your in-house developed applications are changing rapidly, too.

And there is no end in sight as technology marches forward and your IT infrastructure adopts new and useful components. The result? A modern, but more complex environment that is more difficult to understand, track, and manage. Nevertheless, few would dispute that it is imperative to keep up with modern developments to ensure that your company is achieving the best possible return on its IT investment.

Keeping track of it all can be daunting. It is easy to overlook systems and components of your infrastructure as you work to understand and manage the cost and value of your technology assets. And you cannot accurately understand the cost of your IT infrastructure, let alone be sure that you are protecting and optimizing it appropriately, if you do not know everything that you are using. In other words, without transparency there is anarchy and confusion … and higher costs.

IT transparency is an elusive, yet necessary goal. To achieve it, IT must be run like a business, instead of as a cost center. It is often the case that senior executives view IT as a black box; they know it requires capital outlays but have no solid understanding as to where the money goes or how expenditures enable IT to deliver business value. On the other hand, it is not uncommon for senior IT managers to look at company expectations as unrealistic given budget and human resource constraints.

The problem is that there has been no automated, accurate method for managing and providing financial visibility into IT activities. But a new category of software is emerging that delivers cost transparency for IT organizations. The software offers automatic discovery of IT assets with the ability to provide cost details for each asset. By applying analytics to the IT infrastructure and cost data, the software can offer a clear picture of the cost of providing applications and services to your enterprise. This useful insight enables CIOs and other IT managers to make faster, fact-based decisions about provisioning and purchases.
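To make the idea concrete, here is a minimal sketch of the kind of rollup such software performs, assuming asset discovery has already produced an inventory with per-asset costs; the records, field names, and figures below are hypothetical, not any particular product's data model.

```python
# Hypothetical sketch: roll per-asset costs up to a per-application view,
# the core calculation behind IT cost transparency reporting.
from collections import defaultdict

# Each record represents one discovered asset, the application it supports,
# and its monthly cost (hardware amortization, licenses, administration).
# All names and numbers are invented for illustration.
discovered_assets = [
    {"asset": "db-server-01", "application": "Order Entry", "monthly_cost": 4200.0},
    {"asset": "app-server-07", "application": "Order Entry", "monthly_cost": 1800.0},
    {"asset": "storage-array-2", "application": "Data Warehouse", "monthly_cost": 9500.0},
    {"asset": "etl-server-03", "application": "Data Warehouse", "monthly_cost": 2100.0},
]

def cost_by_application(assets):
    """Aggregate per-asset monthly costs into a per-application total."""
    totals = defaultdict(float)
    for record in assets:
        totals[record["application"]] += record["monthly_cost"]
    return dict(totals)

if __name__ == "__main__":
    for app, total in sorted(cost_by_application(discovered_assets).items()):
        print(f"{app}: ${total:,.2f}/month")
```

Real tools layer allocation rules and many more cost sources on top of this, but the rollup shown is the essential step that turns an asset inventory into a cost picture.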

But IT cost transparency is not just a solution for improved communication. By using it to model and track the total cost to deliver and maintain IT software and services, better decisions can be made. For example, components like servers and storage arrays are frequently deployed with more power or capacity than is needed. Such over-provisioning, whether of CPU, memory, storage, or any other IT asset, costs money and wastes resources. Over-provisioning is a problem for both mainframe and distributed systems. With mainframes, you may have more MSU (million service units) or DASD (direct access storage device) capacity than you currently need. Both can be costly in terms of software licensing fees, administration, and management. For distributed systems, you may have many servers that are not running at peak capacity, meaning hardware that you paid for but are not using. And again, there are administration and management costs to factor in.
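On the distributed side, the detection logic can be quite simple. The sketch below assumes you already collect peak utilization figures per server and flags machines whose peak never approached provisioned capacity; the 40% threshold and the sample measurements are assumptions for illustration.

```python
# Hypothetical sketch: flag servers whose observed peak utilization stays
# well below provisioned capacity, a simple over-provisioning signal.
# The threshold and sample measurements are assumptions, not real data.

PEAK_UTILIZATION_THRESHOLD = 0.40  # peaks below 40% suggest over-provisioning

# Server name -> peak CPU utilization observed over a trailing window.
peak_cpu = {
    "web-01": 0.85,
    "web-02": 0.22,
    "batch-05": 0.31,
    "db-primary": 0.78,
}

def right_sizing_candidates(measurements, threshold=PEAK_UTILIZATION_THRESHOLD):
    """Return servers whose peak utilization never reached the threshold."""
    return sorted(name for name, peak in measurements.items() if peak < threshold)

if __name__ == "__main__":
    candidates = right_sizing_candidates(peak_cpu)
    print("Right-sizing candidates:", ", ".join(candidates) or "none")
```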

With an accurate view of what is being used, how it is being used, and what it costs, it becomes possible to provision capacity as needed, thereby reducing costs.

