Addressing Information Imbalances


The information imbalance between the purveyors of AI-enabled systems and their often unwitting subjects is profound, so much so that leading AI researchers point to this chasm as a critical ethics issue in its own right. The reason is largely that public perceptions (or, more accurately, misperceptions) can enable, however unintentionally, the deployment of insidiously invasive or unsound AI applications.

Looking beyond AI, rectifying information imbalances is also—or should be—a core objective of any data or analytics governance program. From data policies to data stewards, each governance asset is a tool to bridge a specific knowledge gap; each is a mechanism by which information known to one cohort is shared with another. A policy bridges the gap between corporate norms or executive expectations and individual actions. Standards ensure proven practices are not the exclusive purview of experienced developers. Data stewards ensure business knowledge does not remain locked within a single domain.

When putting governance into action, it is easy to generate policies and procedures aplenty, or to perfunctorily appoint data stewards. But if the information imbalances (aka knowledge gaps) being addressed are unclear, your governance efforts will be, at best, a waste of time. At worst, they will give your intended constituents a reason to actively disengage. Simply articulating the motive for, the method behind, and the intended use of every governance asset will go a long way toward bridging those gaps.

Motive, quite simply, answers the why question: Why are we creating this asset or role? To whom does it matter, and why will they care? A motive is a statement of intent; as such, it must explicitly specify the decision or problem space being addressed. The expected real-world outcomes of applying the policy, procedure, or system should also be clear. In other words: this policy, perspective, or data product should result in X.

Method addresses how the asset was generated and in what context. It defines the baseline knowledge required to leverage the asset appropriately. At the policy level, the method might include the stakeholders involved, the research conducted, the sources consulted, and the authorities invoked. For an analytics product, be it a KPI or an AI algorithm, the applied technique, data inputs, and other relevant metadata might be specified. In every case, the situations to which the asset applies should be explicit. Conversely, the conditions it does not address, or those in which one might reasonably expect (or suspect) errors to occur, should be just as plainly stated. For an analytics product, this includes communicating the innate strengths and shortcomings of the underlying technique.

Finally, there is the intended manner of use. Should the subject view the provided asset as a provocation, a recommendation, or an instruction? A policy might establish hard boundaries around what is and is not allowed. Alternatively, it may put forward perspectives that must be considered without prescribing an explicit outcome. The output of an AI algorithm may likewise be intended as food for thought or as a prescriptive order; don't ask your audience to guess which is the case. Instead, clearly articulate whether the output is an aid meant to promote critical thinking, a consideration to be given special weight, or a directive.
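To make this concrete, here is one way such metadata might travel with an asset. The sketch below is a minimal, hypothetical example in Python; the record type, its field names, and the churn model it describes are all invented for illustration, not a prescribed standard.

    from dataclasses import dataclass, field
    from enum import Enum

    class IntendedUse(Enum):
        """How the consumer should treat the asset or its output."""
        PROVOCATION = "food for thought, meant to promote critical thinking"
        RECOMMENDATION = "a consideration to be given special weight"
        DIRECTIVE = "a prescriptive order or hard boundary"

    @dataclass
    class GovernanceAssetRecord:
        """Hypothetical metadata record for a governance asset
        (a policy, a KPI, or an AI model). Fields are illustrative."""
        name: str
        motive: str              # why: the decision or problem space addressed
        expected_outcome: str    # the real-world result the asset should produce
        method: str              # how: technique, inputs, sources, stakeholders
        applies_to: list = field(default_factory=list)         # in-scope situations
        known_limitations: list = field(default_factory=list)  # out-of-scope or error-prone conditions
        intended_use: IntendedUse = IntendedUse.RECOMMENDATION

    # Example: documenting a hypothetical churn-scoring model (all values invented)
    churn_score = GovernanceAssetRecord(
        name="customer-churn-score",
        motive="Help the retention team prioritize outreach to at-risk accounts",
        expected_outcome="High-risk accounts are contacted before renewal decisions",
        method="Gradient-boosted trees trained on 24 months of billing and support data",
        applies_to=["active subscription accounts"],
        known_limitations=["accounts less than 90 days old", "negotiated enterprise contracts"],
        intended_use=IntendedUse.RECOMMENDATION,
    )

Whether captured in code, in a data catalog, or in a one-page document, the point is the same: motive, method, and intended use are recorded explicitly rather than left for consumers to infer.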

When it comes to the future of AI, addressing the profound knowledge gap between those who wield AI systems and those subjected to them may be one of the most critical hurdles facing us as a global collective. When it comes to realizing value from data, rectifying the knowledge gaps between data creators and consumers is also a critical hurdle for your corporate collective.


