
Data Governance Gets Meshy


Anything Does Not Go

The data mesh framework prioritizes agility and puts the means of data production in the hands of those most familiar with the data. Data mesh is not, however, synonymous with “anything goes as long as the job gets done.” By embracing federated governance, the data mesh promotes flexibility within domains without sacrificing interoperability and integration. In fact, organizations with robust existing governance and adaptive enterprise architecture practices are best positioned to transition to the distributed data ecosystem the data mesh represents.

Governance requirements that transcend domain boundaries include:

  • Defining data domain boundaries. At first blush, it seems simple: Data domains align with lines of business or business functions. In practice, however, data domains should align with operational business accountability. Such boundaries may be delineated by product or service lines, process ownership, or even market segmentation. Right-sizing and properly aligning data domains is foundational, as it establishes organizational, budgetary, and resource requirements. This is a nontrivial task for traditional data governance programs and data mesh frameworks alike.
  • Defining policies for shared use, privacy, and security. The need for clear policies and SOPs to address global business standards, including legal, regulatory, ethical, and standard operating guidelines, is not new to mesh governance. However, the shifting boundaries between data product owners and consumers and the federated means of production may raise the level and type of cross-functional engagement required. Establishing clear decision rights and guidance on which decisions are and are not within the purview of individual domain owners is foundational. The process for mediating and resolving inevitable interdomain conflicts and disagreements must also be defined. This does not imply that net new pathways must be created. Use of pre-existing mechanisms (be they standing executive or governance committees, boards, and so on) is encouraged.
  • Enabling shared data product catalogues. Increasing the number of data products ready for use “off the shelf” (so to speak) is for naught if the products on the shelf are in a locked or hidden cabinet. Establishing a common mechanism for searching available data products is key. Advanced cataloguing applications utilizing active metadata or graph technologies can provide sophisticated capabilities, but to start, even a consistently well-maintained spreadsheet will do. Like any good reference list, the catalogue should include standard attributes applicable to all data products, as well as secondary elements relevant to different data product types (a minimal sketch of such an entry follows this list). Likewise, a standard format for documenting data product specifications can greatly increase self-service and improve compliance with applicable policies and regulations. Data sheets, model cards, and valuation and risk summaries all play a role here.
  • Standardizing data valuation and cross-domain prioritization. All data domains are not created equal, nor are all data products of equal utility and value. Organizations must have a clear, preferably enterprise-wide, standard for how the costs and value of data products will be calculated and shared. In other words, they must determine whether and how the costs of creating and maintaining these products will be passed on to or shared with consumers. Assessing value can become complicated quickly, considering the myriad ways in which data products may be consumed, including accounting for data products intended to be embedded “inside” other products and services. Some organizations may keep things simple by budgeting for data product creation solely within each business domain. Others may assess a reasonable “sales” price based on usage or other metrics (a simple cost-sharing sketch also appears after this list). Whatever the method, this is a conversation to be had when establishing the practice, not as an afterthought.
  • Developing shared infrastructure and integration standards. This is not to be confused with prescriptive use of centralized data environments (although data mesh does not rule them out entirely). Rather, think infrastructure-as-a-service. This can range from providing common infrastructure for quickly deploying custom data pipelines to standing up shared analytics platforms and toolkits. Organizations familiar with deploying microservices and APIs can likewise leverage those design patterns and protocols to enable integration and interoperability of data products across domain boundaries.
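To make the catalogue idea concrete, the following is a minimal sketch of what a standard catalogue entry might capture, expressed as a Python data class. The field names (owner_domain, refresh_cadence, classification, and so on) are illustrative assumptions, not a prescribed standard; an organization adopting this pattern would substitute its own required attributes.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DataProductEntry:
    """One row in a shared data product catalogue (illustrative fields only)."""
    name: str                    # human-readable product name
    owner_domain: str            # accountable business domain
    description: str             # what the product contains and is for
    classification: str          # e.g., public, internal, restricted
    refresh_cadence: str         # e.g., hourly, daily, monthly
    access_endpoint: str         # where consumers retrieve the product
    contact: str                 # product owner or steward
    tags: List[str] = field(default_factory=list)  # search and discovery aids

# Even a plain list of such entries, kept current, can serve as the
# "consistently well-maintained spreadsheet" until a cataloguing tool is adopted.
catalogue = [
    DataProductEntry(
        name="customer_churn_scores",
        owner_domain="Customer Retention",
        description="Monthly churn-risk scores for active accounts",
        classification="internal",
        refresh_cadence="monthly",
        access_endpoint="s3://analytics/churn/latest",
        contact="retention-data@example.com",
        tags=["churn", "customer", "scores"],
    )
]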
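Similarly, one simple way to operationalize the “sales” price idea from the valuation bullet is to allocate a product’s run cost across consuming domains in proportion to their usage. The sketch below assumes usage is measured in queries per month; any metric the organization standardizes on would work the same way.

def allocate_cost(monthly_cost: float, usage_by_domain: dict) -> dict:
    """Split a data product's monthly cost across consuming domains,
    proportional to each domain's share of total usage."""
    total_usage = sum(usage_by_domain.values())
    if total_usage == 0:
        return {}  # no consumers this period; the owning domain absorbs the cost
    return {
        domain: round(monthly_cost * usage / total_usage, 2)
        for domain, usage in usage_by_domain.items()
    }

# Example: a $9,000-per-month product consumed by three domains.
print(allocate_cost(9000.0, {"Marketing": 600, "Finance": 300, "Support": 100}))
# -> {'Marketing': 5400.0, 'Finance': 2700.0, 'Support': 900.0}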

More, Not Less, Investment and Governance

Without a doubt, adoption of the data mesh construct promises increased availability and utility of data throughout an organization. However, it would be remiss not to highlight that adopting a data mesh approach will also likely:

  • Require more disciplined governance. As noted above, a number of common data paradigms shift when adopting the data mesh, all of which aim to disperse authority and increase the creation and consumption of data products within the organization. Nota bene: Disciplined does not equate to centralized or monolithic. It does, however, imply a keen awareness of what is innately common and what is not, or does not need to be. Organizations with highly functioning and adaptive governance frameworks will find the transition easier.
  • Increase funding requirements for data and analytics. The objective of a data mesh—or indeed any federated or distributed governance model—is to seed skilled resources throughout the organization, thereby increasing the need for sustained investment in data and analytics disciplines and roles, along with a supporting cast including product and change management. It is unlikely that the solution will be to simply divvy up and apportion existing enterprise resources across domains or to simply add data to the portfolio of existing product managers. Companies that have a solid appreciation for the business value of data and analytics and a track record of investing in the requisite skills will find this an easier hurdle to overcome, as will those that have previously invested in localized data and analytics teams.

Ending on a High Note

Adopting a data mesh can also increase the value generated by enterprise innovation centers. Historically, centers of excellence for data and analytics floundered under the weight of unrealistic expectations. Expected to serve as the strategic, forward-looking consultant, the service provider, and the gatekeeper for emerging technology, they did a little of everything—yet not enough of anything to satisfy anybody.

In an era of more widely dispersed resources, the onus will not be on such centers to both innovate and provide the core labor force for operationalizing new capabilities. Rather, the focus can truly be on innovation: seeding ideas and providing a collaborative proving ground open to all creators, thereby allowing innovative practices and thinking to rapidly disperse throughout the enterprise—within and beyond a single business data domain.
