Weaving Data Fabrics: Adapting to the Needs of Data

Data wants to spread, and it wants to grow. So argued Andrew Brust, research director at Gigaom, and Dan Potter, VP of product marketing at Qlik, during DBTA’s webinar, “The Business Case for Data Fabric,” explaining that managing data means going with the grain rather than attempting to “fix” the nature of its existence.

They began by defining data fabric, the “buzzword” of today’s tech discourse: a set of data services that provides capabilities across endpoints spanning hybrid and multi-cloud environments, combining the tools and practices of data management into a service that makes applications easier to build and maintain.

“It’s as much aspirational as it is architectural,” said Potter in regard to data fabric. “Organizations should aspire to have a solid foundation to enable them to embrace a wide variety of different clouds and architectures to better serve the business users. To get them there, some of these foundational capabilities include things like a rich set of metadata across all owned data—it’s about making it available in real-time so that analytics users and applications are running on the latest and greatest data.”

The rise of the data fabric was prompted by the current landscape of data and data technologies: heterogeneous tools, platforms, storage, and locations. Data is scattered across multiple locations, and its complexity keeps increasing due to cloud and hybrid-cloud environments, as well as the innumerable solutions on the market today.

How does the fabric of data weave? According to Brust, it’s across on-premises and cloud environments; throughout cloud-hosted enterprise apps and APIs; between database and data lake; among business partners; and between operational and analytical stores. He emphasized data’s multi-dimensionality as the critical detail that enterprises need to accommodate.

Potter additionally said, “you have a lot of different consumers of this data once you've integrated it, and they have different service-level expectations, too. You may have an operational team that needs to make decisions in seconds, and they need more real-time data, versus someone in finance who may be doing weekly or monthly kinds of reporting. You need to be able to support different latencies and different formats that are stored in different management layers; that’s why the notion of the fabric is so appealing.”

Brust and Potter highlighted the data integration strategy, which alleviates the coding workload and adapts to the needs of continually changing data. Differing code needs impact transformation and ingestion, as well as every other step in the process. This is the business of data integration, Brust pointed out.

The strategy for data integration includes:

  • Use traditional integration where feasible and where query performance is critical.
  • Use the appropriate data integration approach.
  • Have an authoritative and accessible data catalog.
  • Consider the role of automation and augmentation, through AI and ML.
  • Align data integration strategy with data governance policy; the two go hand in hand.

The benefits of these strategies are abundant, according to the speakers: they can reduce data engineering and IT FTE workload by 25% and save 65%-70% of data discovery, analysis, and implementation time. Gigaom’s strategy can deliver exponential value, increasing efficiency in data management by 20% in year one and 33% in year three, as well as increasing data integration governance benefits by 60% in year two and 85% by year three.

Turning to Qlik’s own role as an integration vendor, Potter discussed its ability to provide flexibility and scalability in driving digital transformation. The Qlik Cloud Data Integration Platform allows organizations with disparate data sources to move data in real time with change data capture (CDC), additionally automating the creation of warehouses and lakes. As a powerful data integration fabric, Qlik aids users in delivering, transforming, and unifying enterprise data with flexible, governed, and reusable data pipelines, Potter concluded.
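The CDC pattern Potter describes can be sketched in miniature: rather than bulk-reloading a source table, a pipeline consumes an ordered stream of change events and replays them against a target store. The event shape and function names below are illustrative assumptions for this sketch, not Qlik’s API; real CDC platforms read changes from database transaction logs.

```python
def apply_change(target: dict, event: dict) -> None:
    """Apply one change event ({'op', 'key', 'row'}) to the target store."""
    op, key = event["op"], event["key"]
    if op in ("insert", "update"):
        target[key] = event["row"]  # upsert the latest row image
    elif op == "delete":
        target.pop(key, None)       # remove the row if present


def replicate(change_log: list, target: dict) -> dict:
    """Replay a stream of change events, in order, against the target."""
    for event in change_log:
        apply_change(target, event)
    return target


if __name__ == "__main__":
    # Hypothetical change stream captured from a source table.
    log = [
        {"op": "insert", "key": 1, "row": {"name": "Ada", "dept": "Eng"}},
        {"op": "insert", "key": 2, "row": {"name": "Grace", "dept": "Ops"}},
        {"op": "update", "key": 1, "row": {"name": "Ada", "dept": "Data"}},
        {"op": "delete", "key": 2, "row": None},
    ]
    replica = replicate(log, {})
    print(replica)  # {1: {'name': 'Ada', 'dept': 'Data'}}
```

Because only the changes travel, the target stays current with low latency, which is what lets the same pipeline serve both the seconds-level operational consumers and the batch-oriented reporting consumers mentioned above.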

To learn more about data fabrics, as well as Gigaom and Qlik’s strategies, you can view an archived version of the webinar here.