Governance Is for People, Not Machines


AI system interfaces are becoming more engaging. Exaggerated expectations for AI agents and AI colleagues are swirling, as are attendant discussions of how to control them. With all the focus on rapidly evolving AI capabilities, it’s easy to lose sight of where the burden for responsible innovation lies, as well as what (or who) the subject of governance is.

AI governance or data governance? Naming conventions aside, the subject of governance is not AI, and it is not data. Generative AI interfaces use natural language and interactive formats to engage users. Data is used within AI and analytic systems to derive insights into, and influence, human endeavors. Yet, despite features that appear humanlike, data and AI systems are inanimate objects. Decisions about how such systems are developed and deployed reside solely with humans. It is those decisions and their downstream effects that governance addresses, making governance a manifestly human endeavor.

As debates around aligning or endowing such systems with values and self-directed objectives proliferate, this is a critical distinction. The objective of governance is to guide and direct human decision making. This is true even for agentic AI systems, which are not, wishful hypothesizing aside, self-actualizing or self-propagating. If, why, when, where, and how data and AI systems are deployed remain solely in human hands.

A related tendency is to talk about governance in terms of mechanisms. How do policies get drafted? Who gets to make what decisions? What tools are used to monitor and enforce standards, measure quality, and identify variances?

Governance does not, however, live in staid policy documents or RACI matrices. It does not live in the tools used to implement and enforce standards and measure outcomes. Governance is expressed in the decisions made every day by employees at every level of your organization.

Without a doubt, governance decision making is most visible, and most explicitly acknowledged, at the executive level. Proclamations of organizational values, strategic priorities, and directional guidance are typically widely circulated. The success of your governance initiatives, however, is reflected in the decisions made at the ground level.

What business problems do teams choose to solve? Which technologies are applied to what problems? What actions are allowed and within what boundaries? How does the risk tolerance of the organization align with the risk profile of the selected solution? When do you call a stop or decide to proceed with a proposed application or process? Governance, again, is a manifestly human endeavor.

Emerging technologies can stretch, strain, or exceed the boundaries of existing governance directives. They should not, broadly speaking, fundamentally change an organization’s values and red lines. Sometimes new capabilities pose new questions to be considered. Other developments may prompt reconsideration of previous no-go decisions because formerly unresolvable constraints or risks can now be addressed. None of this means that technical features dictate governance boundaries.

Feature-led governance abdicates decision making to solution providers. This is an incredibly risky strategy for a multitude of reasons, not the least of which is that, absent a single colossal enterprise application, it ensures every application presents a discrete ethical and risk posture. Good governance does require technical literacy. But without well-defined expectations against which to evaluate emergent capabilities, there is no governance.

Without a doubt, visions of self-realizing systems are mesmerizing. Focusing on mechanisms is endlessly distracting. Neither results in good governance. Human decision making lies at the heart of responsible, and irresponsible, innovation. The question, then, is: Do you know what decisions your people are making and why?


