Bridging the Data Divide: Getting the Most Value From Data With Integration

The need for data integration has never been more intense. The Internet of Things and its muscular sibling, the Industrial Internet of Things, are now being embraced as a way to better understand the status and working order of products, services, partners, and customers. Mobile technology is ubiquitous, pouring in a treasure trove of geolocation and usage data. Analytics has become the only way to compete, and with it comes a need for terabytes upon terabytes of data. The organization of 2016, in essence, has become a data machine, with an insatiable appetite for all the data it can ingest.

For many enterprises, however, the road to data-driven nirvana is stymied by inflexible, calcified systems and processes laid out decades earlier that still control the flow of data. Everybody is scrambling to keep up.

Within this new context, data integration needs to be rethought. The traditional methods and technologies that have worked for years may no longer be effective. “With the rise in IoT information and cloud-based enterprise applications, enterprise data integration is in the midst of one of the greatest transformations in the past 2 decades,” observed Mark Palmer, senior vice president and general manager of engineering at TIBCO Software. This new integration context “brings new and complex data integration challenges, and the infrastructure required to deal with it is not delivering all it can today,” he added.

The vast amount of data being created, combined with demand for real-time access, “adds new stresses to the data management function,” agreed Ian Rowlands, vice president of product management for metadata at ASG Software Solutions. “This is driving disruptive changes in data management, and some of those changes are pervasive, as by-product regulatory demands are forcing work on data management teams that may already be scrambling to keep up.”

In an economic environment that demands real-time insights, information is still moving too slowly. Enterprises are weighed down by inadequate performance, siloed data, and slow response times. A recent survey of more than 300 data managers and professionals conducted by Unisphere Research, a division of Information Today, Inc., publisher of DBTA, finds that the inability to get at needed information is a major inhibitor to decision making in today’s organizations. A majority of managers and professionals say they encounter a lack of complete information, as well as delays in getting the information they need (“Moving Data at the Speed of Business: 2016 IOUG Survey on Data Delivery Strategies,” February 2016).

The time is ripe for comprehensive enterprise data integration. “Data integration can help companies to liberate data, produce value from it, and operationalize insights to drive change throughout the organization,” said Jean-Luc Chatelain, managing director of Accenture Analytics. “The integration process can also reduce the amount of disparate and inaccurate data, generating better operational efficiencies by lowering costs associated with data management.”


Is enterprise data integration, as it stands today, delivering all it can? “Not even close,” said Sumit Nijhawan, CEO and president of Infogix. He observed that most companies estimate they are analyzing only a fraction of the data they have, with some estimates as low as 12% of total data assets.

Even if data is captured and stored, it’s not a certainty that it is of value. “Today, 60% to 80% of the time for major integration projects is spent just wrangling data versus generating value from that data,” said David Gorbet, vice president of engineering at MarkLogic. “That’s because most data integration projects are still using legacy relational technology to bring data together, and that technology requires you to know all the data you’re going to integrate and all the queries you’re going to make on that data so that you can design a usually very complex relational schema. It then requires that you map all your data sources to that schema and write ETL jobs to transform the data. Many projects fail because they can’t even get that far, but for those that succeed, evolving this system to support new data or new use cases proves impossible—they’ve just succeeded in creating yet another data silo in their organization.”
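To make Gorbet's point concrete, here is a minimal sketch in Python of the schema-first pattern he describes. Everything in it is hypothetical (the CRM and billing feeds, the field names, the target table); the point is only the shape of the work: a rigid target schema designed up front, a hand-written mapping for each source, and an ETL job that loads everything into the one shape the schema allows.

    # Minimal sketch of schema-first relational integration.
    # All feeds, field names, and tables here are hypothetical.
    import sqlite3

    # 1. The target relational schema must be designed before any data arrives.
    TARGET_DDL = """
    CREATE TABLE IF NOT EXISTS customer (
        customer_id TEXT PRIMARY KEY,
        full_name   TEXT NOT NULL,
        region      TEXT
    )
    """

    # 2. Every source system needs its own hand-written mapping to that schema.
    def map_crm_record(rec: dict) -> tuple:
        # The CRM feed uses 'cust_no' and split name fields.
        return (rec["cust_no"], f'{rec["first"]} {rec["last"]}', rec.get("territory"))

    def map_billing_record(rec: dict) -> tuple:
        # The billing feed uses 'account_id' and a single 'name' field.
        return (rec["account_id"], rec["name"], rec.get("region"))

    # 3. An ETL job transforms and loads each source into the one allowed shape.
    def run_etl(conn, crm_rows, billing_rows):
        conn.execute(TARGET_DDL)
        stmt = "INSERT OR REPLACE INTO customer VALUES (?, ?, ?)"
        conn.executemany(stmt, (map_crm_record(r) for r in crm_rows))
        conn.executemany(stmt, (map_billing_record(r) for r in billing_rows))
        conn.commit()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        run_etl(
            conn,
            crm_rows=[{"cust_no": "C1", "first": "Ada", "last": "Lovelace", "territory": "EMEA"}],
            billing_rows=[{"account_id": "C2", "name": "Alan Turing", "region": "EMEA"}],
        )
        print(conn.execute("SELECT * FROM customer").fetchall())

Even at this toy scale, the rigidity Gorbet describes is visible: every new source needs another hand-written mapping, and any change to the target schema ripples through all of the mappings and their ETL jobs.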
