Data Observability is the Key to Ensuring Fresh and Reliable Data Pipelines


Working with data and databases presents a multitude of challenges, often summed up by the questions, "What happened to my data?" and "Why is this data all wrong?" Whether data is stale or unreliable, the solution lies in operationalizing data observability, a monitoring practice that tracks the health of data at each stage of the pipeline.

Glen Willis, solutions architect at Monte Carlo, joined DBTA’s webinar, Operationalizing Data Observability: Best Practices and Critical Strategies, to explore how data observability can be implemented—with best practices, strategies, and tools—to keep data fresh, reliable, and efficient.

Willis opened the conversation with a few telling statistics: according to Monte Carlo research and customer-reported benchmarks, organizations reported that 30-50% of data engineering time is spent on data quality issues. Furthermore, 80% of data science and analytics teams' time is spent on collecting, cleaning, and preparing data, according to a report from Crowdflower.

These statistics underscore a pressing challenge that many enterprises face today: data quality and reactive data handling. With days to weeks passing before data quality incidents are detected and resolved (the "Why is the data all wrong?" moment), Willis argued that detection, resolution, and schema management workflows are the main contributors to the data downtime that undermines data quality.

Fortunately for the data industry, data downtime looks similar at all companies, Willis explained. This means that a proper solution can target data downtime and resolve this massive resource drain for any organization. Enter Monte Carlo's Data Observability Platform, a monitoring platform that proactively detects, resolves, and prevents data quality issues before they make a negative impact.

Driven by Monte Carlo’s 5 pillars of data observability—freshness, volume, quality, schema, and lineage—applied to each stage of the pipeline, the Data Observability Platform understands the health of data in its systems, ultimately eliminating data downtime while maximizing data investments.
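To make the freshness and volume pillars concrete, the sketch below is a minimal, hypothetical Python example (not Monte Carlo's platform or API; the TableStats structure, thresholds, and helper names are assumptions) showing how a pipeline stage might be flagged when a table stops loading or a load arrives unexpectedly small:

```python
# Minimal sketch of freshness and volume checks, assuming a warehouse table
# exposes a last-loaded timestamp and the row count of its latest load.
# All names and thresholds here are hypothetical illustrations.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class TableStats:
    last_loaded_at: datetime   # most recent load time for the table
    row_count: int             # rows written in the latest load


def check_freshness(stats: TableStats, max_staleness: timedelta) -> bool:
    """Pass only if new data has arrived within the expected window."""
    return datetime.now(timezone.utc) - stats.last_loaded_at <= max_staleness


def check_volume(stats: TableStats, expected_rows: int, tolerance: float = 0.5) -> bool:
    """Pass only if the latest load is not drastically smaller than expected."""
    return stats.row_count >= expected_rows * tolerance


# Example: a table expected to refresh hourly with roughly 10,000 rows per load.
stats = TableStats(last_loaded_at=datetime(2024, 1, 1, tzinfo=timezone.utc), row_count=1200)
if not check_freshness(stats, timedelta(hours=2)):
    print("Freshness alert: table has not loaded new data recently")
if not check_volume(stats, expected_rows=10_000):
    print("Volume alert: latest load is unexpectedly small")
```

Similar rule-based or learned checks for quality, schema, and lineage would round out coverage of the remaining pillars across each stage of the pipeline.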

While end-to-end visibility certainly optimizes data value throughout the pipeline, scaling data incident management is another challenge for organizations dealing with massive amounts of data. Willis explained that alert fatigue, the phenomenon in which a large volume of alerts desensitizes and disengages recipients, is a significant obstacle to scalable incident management.

Willis offered the Monte Carlo Method for Data Monitoring, consisting of the following questions to ensure that incident management scales while still being effective:

  • Is the monitoring valuable?
  • Are alerts delivered effectively?
  • Are individuals empowered to act?
  • During outages, are the right people informed?

These questions are designed to address data noise, ignored alerts, prolonged downtime, and decreased trust that may arise when dealing with data incident management at scale.
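One way to act on the delivery and ownership questions above is to route alerts by data domain and severity so the right people are paged and lower-priority noise is batched. The sketch below is a hypothetical Python illustration (the domain tags, channel names, and severity levels are assumptions, not Monte Carlo functionality):

```python
# Hypothetical alert routing: send high-severity alerts straight to the owning
# team's channel and batch the rest into a digest to limit alert fatigue.
from collections import defaultdict

ROUTES = {
    "finance": "#finance-data-alerts",
    "marketing": "#marketing-data-alerts",
}
DEFAULT_CHANNEL = "#data-platform-alerts"


def route_alerts(alerts):
    """Group alerts by destination channel; page on urgent ones, digest the rest."""
    digests = defaultdict(list)
    for alert in alerts:
        channel = ROUTES.get(alert["domain"], DEFAULT_CHANNEL)
        if alert["severity"] == "high":
            print(f"PAGE {channel}: {alert['message']}")
        else:
            digests[channel].append(alert["message"])
    for channel, messages in digests.items():
        print(f"DIGEST {channel}: {len(messages)} lower-severity alerts")


route_alerts([
    {"domain": "finance", "severity": "high", "message": "revenue table is stale"},
    {"domain": "finance", "severity": "low", "message": "schema change on dim_accounts"},
    {"domain": "ads", "severity": "low", "message": "row count dip in clicks table"},
])
```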

Luckily, Monte Carlo also offers a few tools and best practices to further tackle data incident management at scale, Willis explained. Through streamlined communications, a layered approach to user management, domain planning and notification strategy, and blameless post-mortems, organizations can address the growing challenges that leave their large quantities of data unreliable, untrustworthy, and stale.

For an in-depth discussion and demo of data observability operationalization, you can view an archived version of the webinar here.
