Setting Data Fabrics Up for Success with TimeXtender, InterSystems, and Fivetran


Data fabric architecture, a smart, unified web of information that supports real-time access, governance, and agility, is an increasingly popular choice for modernizing data infrastructures. Yet despite its promise of centralized, streamlined data accessibility, the challenge of selecting the right tools and frameworks can make successful implementation precarious.

DBTA’s webinar, Getting Data Fabric Right: Key Tools and Best Practices, brought together data fabric experts to examine the core components of an effective data fabric, as well as must-have tools, best practices, and common pitfalls.

Micah Horner, product marketing manager, TimeXtender, highlighted the seven promises of data fabric:

  1. Holistic Data Integration: Integrates all disparate data sources.
  2. Unified Data Access: Offers a single access point for all data.
  3. Active Metadata Management: Automatically captures and leverages metadata.
  4. Data Automation: Automates data processes to reduce manual effort.
  5. Flexible Deployment: Works seamlessly across cloud, on-prem, and hybrid environments.
  6. End-to-End Orchestration: Manages complex data workflows from start to finish.
  7. Data Governance and Security: Embeds robust controls for security, quality, and compliance.

If data fabric brims with possibility, why is it so hard to get right? Horner pointed to the fact that modern data stacks often consist of disjointed tools and ad hoc pipelines, creating massive integration challenges that are amplified by the lack of a metadata standard. Hand-coded data pipelines, static metadata management, and a reliance on generative AI to produce (often inaccurate) code further impede data fabric success.

A better approach to data fabric, according to Horner, follows three foundational principles:

  1. Metadata-Driven: An approach where metadata is the foundation for all data management and automation. It acts as the "connective thread" that weaves the entire data fabric together and must be built on active metadata that is constantly updated throughout the data lifecycle.
  2. Automation-First: This is the engine that drives efficiency. It automates the entire data lifecycle—including code generation and end-to-end orchestration—using a reliable, deterministic approach that ensures speed and consistency.
  3. Zero-Access: All data processes should be orchestrated using metadata, rather than requiring direct access to your actual data. This architectural choice provides key benefits, such as enhanced security and compliance, true portability that avoids vendor lock-in, and embedded governance where rules are consistently applied everywhere (see the sketch after this list).
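To make these principles concrete, here is a minimal, hypothetical sketch of a metadata-driven, zero-access pipeline. The class and function names are invented for illustration; this is not TimeXtender’s actual API. The pipeline is described entirely as metadata, and a deterministic generator, rather than generative AI, emits the SQL, so the orchestration layer never reads the underlying rows:

```python
from dataclasses import dataclass

# Hypothetical sketch: the whole transformation is described as metadata,
# and deterministic code generation produces the executable SQL.

@dataclass(frozen=True)
class FieldMapping:
    source_column: str
    target_column: str

@dataclass(frozen=True)
class PipelineMetadata:
    source_table: str
    target_table: str
    mappings: tuple

def generate_sql(meta: PipelineMetadata) -> str:
    """Deterministically generate SQL from metadata alone.

    Zero-access: this function never touches row data; the generated
    statement is handed to the target engine to execute.
    """
    select_list = ",\n       ".join(
        f"{m.source_column} AS {m.target_column}" for m in meta.mappings
    )
    return (
        f"INSERT INTO {meta.target_table}\n"
        f"SELECT {select_list}\n"
        f"FROM {meta.source_table};"
    )

pipeline = PipelineMetadata(
    source_table="crm.raw_customers",
    target_table="dw.dim_customer",
    mappings=(
        FieldMapping("cust_id", "customer_key"),
        FieldMapping("cust_name", "customer_name"),
    ),
)

print(generate_sql(pipeline))
```

Because the definition lives in metadata rather than hand-written code, the same pipeline can be regenerated for a different engine or environment, which is what gives the pattern its portability and embedded governance.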

Jeff Fried, director, platform strategy, InterSystems, echoed Horner, further defining data fabric as an “architectural pattern for common governance over distributed data…used to connect data wherever it lies.” Getting that governance right, Fried highlighted, is foundational for data fabric to succeed.

Within a data fabric is a multitude of technological pieces used to integrate data and its sources, manage ingestion, ensure connectivity, implement metadata, lineage, and governance, and more. The InterSystems approach features pre-integrations that simplify that tech stack, centralizing business users, data scientists, reporting and analytics tools, and applications in one place. Advancing a “smart” data fabric, InterSystems’ framework emphasizes common governance, a single source of truth, low-code application development, and a generative AI core.

InterSystems’ Data Fabric Studio offers a drag-and-drop experience for building a data fabric, explained Fried. Seamlessly connecting data sources to data users, Data Fabric Studio integrates, governs, and persists data logic and metadata to underpin an effective data fabric.

Casey Karst, principal product manager, data lakes, Fivetran, explained that modern data fabrics demand a storage foundation that balances flexibility, governance, and performance while maintaining a low TCO. He further argued that data lakes are the “basis for this interoperable storage layer of the future,” where improved data lake management and maintenance are reviving the data lake by:

  • Addressing management and compliance challenges by introducing open table formats (see the sketch after this list)
  • Offering scalable and affordable storage for large data volumes
  • Interoperating with data warehouses, catalogs, and other modern data stack technologies
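For readers who have not worked with open table formats, here is a minimal PySpark sketch of the idea using Delta Lake, one of the formats named above. It assumes the delta-spark package is installed, and the local table path is purely illustrative. Data lands as ordinary files in cheap storage, while the format’s transaction log layers warehouse-style ACID guarantees and schema enforcement on top:

```python
from delta import configure_spark_with_delta_pip  # pip install delta-spark
from pyspark.sql import SparkSession

# Standard Delta Lake session setup, per the Delta quickstart.
builder = (
    SparkSession.builder.appName("open-table-format-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Write rows to plain file storage; the Delta transaction log makes the
# directory an ACID table rather than a loose pile of files.
customers = spark.createDataFrame(
    [(1, "Ada"), (2, "Grace")], ["customer_id", "name"]
)
customers.write.format("delta").mode("overwrite").save("/tmp/lake/customers")

# Appends are atomic: readers see either the old or the new snapshot,
# never a partially written state.
spark.createDataFrame([(3, "Edsger")], ["customer_id", "name"]) \
    .write.format("delta").mode("append").save("/tmp/lake/customers")

spark.read.format("delta").load("/tmp/lake/customers").show()
```

Because Delta (like Iceberg) is an open specification, the same table can be read by warehouses, catalogs, and query engines beyond Spark, which is the interoperability the panelists described.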

With traditional data stacks being both rigid and expensive, Fivetran brings data warehouse functionality to data lakes, establishing a universal storage layer for effective data fabric architecture implementation. Merging what is valued about data warehouses (query-ready data and ACID-compliant consistency) with the strengths of data lakes (flexible support for structured and unstructured data, plus cost effectiveness), Fivetran’s managed data lake service:

  • Normalizes, deduplicates, and securely lands data into your data lake to create a compliant universal storage layer and increase discoverability
  • Converts data into standardized formats (Iceberg/Delta Lake) to reduce vendor lock-in and provide ready-to-use data to power analytics and AI/ML
  • Covers the ingestion cost, allowing your business to innovate without going over budget

This is only a snippet of the full Getting Data Fabric Right: Key Tools and Best Practices webinar. For more detailed explanations, a Q&A session, and more, you can view an archived version of the webinar here.

