Steps for Successful Cultivation and Management of a Data Fabric Architecture

A hot topic in the world of data-centric enterprises is undoubtedly the implementation of data fabrics. Along with its popularity comes a certain amount of confusion: while everyone wants to maximize the value of their data through increased accessibility, creating a successful data fabric, and managing it accordingly, is an arduous journey.

Experts joined DBTA’s webinar, “Implementing and Managing a Data Fabric: Steps to Success,” to offer IT decision makers and data professionals a variety of strategies and best practices for cultivating an optimized, manageable data fabric.

Sam Chance, principal consultant at Cambridge Semantics, launched the conversation by asserting that not all data fabrics are created equal; rather, evaluating an enterprise’s unique needs and problems will provide a sturdy foundation for implementing a data fabric. As Chance explained, the underlying model is the key to success, not the implementation strategy.

He further argued that knowledge graph technology can provide that critical and successful underlying model for a data fabric’s frictionless access to information on demand. Knowledge graph technology can drive adaptability in the face of uncertain, complex, and changing data, where intelligent metadata and business-oriented strategies can support a robust and effective data fabric.
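As a rough illustration of the knowledge-graph model Chance described, metadata about data assets can be held as subject-predicate-object triples and queried on demand, independent of where the data physically lives. The sketch below is a minimal, stdlib-only approximation; every dataset name and predicate in it is a hypothetical example, not anything from the webinar.

```python
# Illustrative sketch only: data fabric metadata as a tiny knowledge graph
# of (subject, predicate, object) triples. All names are hypothetical.

TRIPLES = {
    ("orders_db", "type", "Dataset"),
    ("orders_db", "locatedIn", "postgres_cluster"),
    ("orders_db", "hasOwner", "sales_team"),
    ("churn_model", "type", "Dataset"),
    ("churn_model", "derivedFrom", "orders_db"),
}

def query(graph, subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None acts as a wildcard)."""
    return [
        (s, p, o) for (s, p, o) in graph
        if subject in (None, s) and predicate in (None, p) and obj in (None, o)
    ]

# Which assets are derived from orders_db? The graph answers on demand,
# without the consumer needing to know the physical storage layout.
lineage = query(TRIPLES, predicate="derivedFrom", obj="orders_db")
```

In a production fabric this role is typically played by an RDF store queried via SPARQL, but the principle is the same: relationships between data assets are first-class, queryable facts.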

Outside of graph technology, Chance offered the following steps for successful data fabric implementation:

  • Get educated about data fabric, then evangelize
  • Focus on a particular problem that others also care about
  • Assemble a team with a low or no barrier to entry
  • Treat your data as a product, where new datasets can be reused
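The last step above, treating data as a product, can be pictured as publishing each dataset with an owner, a version, and a documented schema so other teams can discover and reuse it rather than re-ingest it. The sketch below is a hypothetical, minimal registry; all names and fields are illustrative assumptions.

```python
# Hypothetical sketch of "treat your data as a product": each dataset is
# published with an owner, a version, and a schema contract for reuse.
from dataclasses import dataclass

@dataclass
class DataProduct:
    name: str
    owner: str
    version: str
    schema: dict  # column name -> type: the product's published contract

registry: dict[str, DataProduct] = {}

def publish(product: DataProduct) -> None:
    """Register a data product so other teams can discover and reuse it."""
    registry[product.name] = product

publish(DataProduct("customer_orders", "sales_team", "1.2.0",
                    {"order_id": "int", "amount": "float"}))

# A second team reuses the published dataset instead of re-ingesting it.
reused = registry["customer_orders"]
```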

Tom Hoblitzell, VP of data management at Datavail, outlined the benefits, best practices, key elements, and use cases that comprise and drive the need for a data fabric architecture.

Its benefits, he argued, are plentiful; most notably, data fabrics can consolidate data from multiple sources onto a single platform, process and analyze data in real-time, enable data access from anywhere, at any time, and allow for data processing and analysis capabilities to be scaled as needed.

These data fabric benefits are only accessible if certain key elements are in place to support positive outcomes, according to Hoblitzell. He argued that augmented knowledge graphs, intelligent integration, self-service data usage, unified data lifecycles, multimodal governance, and AI and hybrid-cloud readiness are critical to implementing an optimized data fabric.

Hoblitzell additionally listed the following best practices for managing data fabrics:

  • Set up a data marketplace for citizen developers
  • Deploy graph-based analytics to find correlations
  • Understand your compliance and regulatory requirements
  • Proactively avoid building just another data lake
  • Embrace a DataOps process model
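One way to read the graph-based analytics practice above is linking datasets that share columns and then walking the graph to surface related assets. The following is a hedged, stdlib-only sketch of that idea, not anything Hoblitzell presented; the dataset names and columns are made up for illustration.

```python
# Illustrative sketch: graph-based analytics over data assets. Datasets
# sharing a column are linked; a graph walk then finds related assets.
from itertools import combinations

datasets = {
    "orders":    {"customer_id", "order_id", "amount"},
    "customers": {"customer_id", "region"},
    "shipments": {"order_id", "carrier"},
    "hr_roster": {"employee_id", "name"},
}

# Build an undirected graph: an edge joins any two datasets with a shared column.
edges = {name: set() for name in datasets}
for a, b in combinations(datasets, 2):
    if datasets[a] & datasets[b]:
        edges[a].add(b)
        edges[b].add(a)

def related(start):
    """Breadth-first walk: every dataset reachable from `start` via shared columns."""
    seen, queue = {start}, [start]
    while queue:
        for nxt in edges[queue.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}
```

Here `related("orders")` surfaces `customers` and `shipments`, while the unconnected `hr_roster` stays out of the correlation neighborhood.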

Kevin Bohan, director of product marketing at Denodo, centered his portion of the data fabric implementation discussion on skipping the trough of disillusionment phase, in which inflated expectations driven by market hype taint the effective utilization of a data fabric architecture.

The value of a data fabric architecture is very real; Bohan cited Sunbelt Rentals, where data fabric implementation enabled a 63% boost in speed. Even so, ensuring the right capabilities are in place will determine which fabrics succeed and which fail amid hype-induced rushes toward a data value solution.

Bohan listed the following capabilities as crucial to cultivating an effective and manageable data fabric:

  • A semantic layer, which allows an enterprise to present data to consumers in the shape, format, and structure that they can easily understand.
  • Active metadata, which can be used to either automate data management tasks or provide insights and/or recommendations to optimize operations.
  • An augmented data catalog, which provides a centralized, shopping-like experience for users to search an inventory of data assets across the organization.
  • A recommendations engine, which leverages active metadata to apply AI and ML routines, enabling recommendations of optimizations in data integration and data delivery.
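The first capability above, a semantic layer, can be pictured as a mapping that translates physical column names into the business-friendly vocabulary consumers actually understand. The sketch below is a deliberately minimal approximation; the column names and mapping are hypothetical, and real semantic layers also handle joins, metrics, and security.

```python
# Illustrative sketch of a semantic layer: rename cryptic physical columns
# into business terms so consumers never see source-system naming.
# All column names here are hypothetical.

SEMANTIC_MAP = {
    "cust_nm":     "customer_name",
    "ord_ttl_amt": "order_total",
    "ord_dt":      "order_date",
}

def to_business_view(row: dict) -> dict:
    """Translate a physical record into the consumer-facing vocabulary."""
    return {SEMANTIC_MAP.get(col, col): value for col, value in row.items()}

physical = {"cust_nm": "Acme Corp", "ord_ttl_amt": 129.95, "ord_dt": "2024-05-01"}
business = to_business_view(physical)
```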

Ultimately, Bohan emphasized the need to set expectations and show progress while building a data fabric architecture. Clearly defining the data fabric’s objectives upfront, and communicating them to all stakeholders, maximizes the architecture’s future potential; he made clear that a data fabric’s value improves over time as more metadata becomes available and more data assets are involved.

For an in-depth discussion and review of data fabric implementation strategies, you can view an archived version of the webinar here.