An Antidote To Crumbling IT Infrastructures


Ask a city engineer what a crumbling infrastructure looks like and chances are you’ll hear about roads, bridges, and buildings that have aged past their prime. Ask an IT manager or CIO and you will probably get a different answer, one that centers on the inability of applications, databases, and networks to keep up with the pressure of moving growing volumes of data in today’s ever-shortening timeframes.

In fact, an organization’s ability to move data effectively is an excellent indicator of the health of that organization’s IT infrastructure. Moving data effectively means it gets to the right place—to the customer-service and new-product analysts; to builders of new products and applications—without becoming mired in time-consuming bottlenecks.

On paper, your infrastructure may appear adequate, with a variety of applications exchanging data with ERP/CRM systems and data stores. You’ve introduced cloud-based solutions to business units, and you’re migrating more applications to remote devices.

But data is not moving as it should. Approval workflows and other business processes are taking forever. Critical supply-chain events are getting lost in ERP systems. Employees are still copying documents to hard drives and then mailing—or even FedExing—them to ecosystem partners.

Looking for Slowdowns

The first place to look for data congestion is at the edge of the enterprise, where databases are indexing and normalizing incoming structured and unstructured data. Building terabyte- and petabyte-size indexes can take time, and the coming reams of IoT data will add exponentially to the burden of indexing, normalizing and moving that data through the enterprise.
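To make that normalization step concrete, here is a minimal Python sketch (the field names and unit conversions are hypothetical, not drawn from any real product) of mapping inconsistently shaped incoming readings onto a single schema before any indexing takes place:

    from datetime import datetime, timezone

    # Map each device's field names onto one canonical schema.
    # These aliases and conversions are illustrative only.
    FIELD_ALIASES = {"temp": "temperature_c", "tempF": "temperature_c", "ts": "timestamp"}

    def normalize(record):
        out = {}
        for key, value in record.items():
            canonical = FIELD_ALIASES.get(key, key)
            if key == "tempF":  # one device reports Fahrenheit; convert to Celsius
                value = (value - 32) * 5.0 / 9.0
            if canonical == "timestamp" and isinstance(value, (int, float)):
                # epoch seconds become ISO 8601 strings
                value = datetime.fromtimestamp(value, tz=timezone.utc).isoformat()
            out[canonical] = value
        return out

    # Two devices reporting the same measurement in different shapes:
    print(normalize({"tempF": 71.6, "ts": 1700000000}))
    print(normalize({"temp": 22.0, "timestamp": "2023-11-14T22:13:20+00:00"}))

Multiply that small mapping by thousands of device types and billions of readings, and the scale of the edge workload becomes clear.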

Once inside the organization’s storage environment, that data then faces the age-old silo problem. The landscapes are new—data lakes and clouds have replaced dumb terminals and proprietary operating systems—but the fundamental problem is the same. Handoffs must take place between applications, ERP/CRM systems, databases and data warehouses, and many of these require changes in data formats in order to function correctly.
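As an illustration of what those handoffs involve, here is a brief Python sketch (all field names are hypothetical) that reshapes a nested CRM-style order document into the flat rows a warehouse bulk loader typically expects:

    import csv
    import io

    # A nested CRM-style document: one order, one customer, several line items.
    crm_record = {
        "order_id": "A-1001",
        "customer": {"id": 42, "name": "Acme Corp"},
        "lines": [{"sku": "X1", "qty": 3}, {"sku": "X2", "qty": 1}],
    }

    def to_warehouse_rows(doc):
        """Flatten the document into CSV rows, one per line item."""
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(["order_id", "customer_id", "sku", "qty"])
        for line in doc["lines"]:
            writer.writerow([doc["order_id"], doc["customer"]["id"],
                             line["sku"], line["qty"]])
        return buf.getvalue()

    print(to_warehouse_rows(crm_record))

Every silo boundary needs a translation of this kind, and each one is another place for data to slow down or go wrong.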

The third problem is that of control. Once inside the organization, data is typically replicated for use by different business units. The data streamed at the edge now grows in multiples, driving up costs—whether in storage or cloud fees—and is ever-changing in nature. This causes management reports to deliver inconsistent results, and business units to sometimes work at cross-purposes. There are times when newly independent business units should look to IT for help, and this is one of them. The concept of self-service is valid, but IT still needs to maintain centralized control over company-wide functions such as compliance, authentication, and standardization of data definitions.
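One way to picture that centralized role: IT publishes a single authoritative data definition, and every business-unit replica is checked against it before use, so copies cannot silently drift. A minimal Python sketch, with hypothetical field names:

    # One authoritative definition, maintained centrally by IT.
    CUSTOMER_SCHEMA = {"customer_id": int, "name": str, "region": str}

    def validate(record, schema):
        """Return a list of violations; an empty list means the record conforms."""
        problems = []
        for field, expected in schema.items():
            if field not in record:
                problems.append("missing field: " + field)
            elif not isinstance(record[field], expected):
                problems.append(field + ": expected " + expected.__name__
                                + ", got " + type(record[field]).__name__)
        return problems

    # A business-unit copy that has drifted from the central definition:
    print(validate({"customer_id": "42", "name": "Acme Corp"}, CUSTOMER_SCHEMA))
    # -> ['customer_id: expected int, got str', 'missing field: region']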

There’s no easy answer to the dilemma of infrastructure decline. But the obvious response, to attack the problems in piecemeal fashion, clearly doesn’t work. That’s what organizations have been doing all along: bringing in faster processors, adding databases, deploying more clouds. These may offer temporary relief, but eventually they’ll just add to data complexity and congestion.

A better solution is to take a higher-level view of the goal, rather than the components, of technology. That goal: Getting the right data to the right person in the shortest possible timeframe. It shouldn’t matter where the data is or who “owns” it.

Rethinking Technology

It’s well known that the best application designs start with the user experience, then work down to the bits, bytes, and components that make that view happen. The technology is now in place to support that same practice, making it possible to introduce an operations layer into the IT infrastructure.

Data platforms, now appearing from a range of vendors, deserve a close look from any organization struggling with an outdated or overly complex IT infrastructure. The best data platforms are cloud-based and, thanks to today’s abundance of high-performance underpinnings, fully capable of handling ultra-high volumes of streaming IoT data.

They’re compatible with industry-standard databases, warehouses, and modeling and application-development tools. And they’re capable of giving non-technical business analysts immediate insights into the data, and the trends, they want to see.

The better platforms also have ways, hidden beneath the surface, of speeding up indexing and other time-consuming processes, and of generally making the IT infrastructure visible at every level of use, from developers up to the C-suite.

Importantly, they’re compatible with all cloud types, from SaaS and PaaS to public, private and hybrid, and even with one another.

They do require a change in perspective, though, from upper management. That’s because bringing in a data platform requires an organization-wide commitment to change, and change can be difficult even for the most aggressive management teams. It’s easier to allocate budget in bits and pieces, from new hardware here to another cloud there. But that’s what brought trouble in the first place. So management first has to understand that a major degree of change is worth the time and effort. Then business units and IT will follow suit.

Beyond commitment, an operations layer will require the time of an IT-savvy team, perhaps led by the CIO or CFO, to review industry platform offerings and then to determine where best to begin with a pilot program. And throughout, they’ll want to maintain a close focus on KPIs that show that data is moving quickly and effectively through the company’s business processes.
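As a concrete example of one such KPI, the sketch below (in Python, with a hypothetical event format) computes end-to-end latency: the gap between when a record enters a business process and when it reaches its consumer.

    from datetime import datetime
    from statistics import median

    # Hypothetical pipeline events: when each record was ingested and delivered.
    events = [
        {"record": "r1", "ingested": "2024-03-01T09:00:00", "delivered": "2024-03-01T09:04:30"},
        {"record": "r2", "ingested": "2024-03-01T09:01:00", "delivered": "2024-03-01T09:11:00"},
        {"record": "r3", "ingested": "2024-03-01T09:02:00", "delivered": "2024-03-01T09:05:00"},
    ]

    def latency_seconds(event):
        start = datetime.fromisoformat(event["ingested"])
        end = datetime.fromisoformat(event["delivered"])
        return (end - start).total_seconds()

    latencies = sorted(latency_seconds(e) for e in events)
    print("median latency: %.0fs, worst: %.0fs" % (median(latencies), latencies[-1]))
    # -> median latency: 270s, worst: 600s

Tracked before and after the pilot, a handful of numbers like these will show whether the platform is actually unclogging the company’s data flows.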

It won’t happen overnight, of course, but a solid commitment to an operations platform brings a worthy benefit: It can finally put to bed the tail-chasing frustrations—and costs—of adding so-called point solutions to what is in fact a very large infrastructure problem.

