Confront Uncertainty Early and Often
In a prior column, the need for governance programs to incorporate research and development into their work was discussed.
Governance can also benefit from adopting more robust methods to assess the impacts and implications of emerging AI-enabled applications. Stealing from the futurist/strategist playbooks, practices such as scenario planning, pre-mortems, and red teaming help clarify the level of uncertainty and risk an organization is prepared to tolerate.
To be clear, scenario planning is not intended to test your ability to predict the future. Rather, it asks you to accept that you can’t and to stretch your thinking to consider nonconventional views. A common refrain is that scenario planning projects what might happen if things get better, get worse, or get weird.
Done well, scenario planning sheds light on the range of acceptable outcomes and level of risk the organization is prepared to take on in specific business and technical contexts.
Pre-mortems are a variation on scenario planning that starts at the undesired end, namely, with the premise that the project has failed. The team is asked to imagine all the ways and means by which failure could arise. As the name suggests, pre-mortems occur before a project begins. Pre-mortems traditionally focused on identifying failure modes exclusively. Modern approaches also consider the implications, including second- or third-order effects if a project is wildly successful.
Red teaming is another useful method to stress-test a solution. In this case, the intent is to shake out risks or harms that might otherwise go undetected until after a solution is live. In AI today, this is a hands-on experience, as friendly hackers are invited to conjure up a multitude of undesirable (or unexpected) ways a system could be used and then to make it so. While red teaming can be applied during solution ideation, it is more commonly applied during solution development and validation.
Not all AI-enabled systems and decisions require this level of analysis. However, in the case of rapidly evolving technologies with unknown/untested risk profiles, these methods provide a structured way to confront uncertainty. By enabling teams to think critically about systems whose behavior is not yet proven or immediately obvious, you also increase the chance that appropriate guardrails and boundaries are established for the system-to-be. Yes, you may still get it wrong, but odds are, you’ll have a more informed inkling about why, as well as a head start on addressing any harms that do materialize.
Reuse and Repurpose
As humans, we are drawn to novelty. New and unusual circumstances command an outsized portion of our attention.
This can, at times, cause us to over-index on what is different at the expense of what is the same. When tools such as GenAI/LLMs appear on the scene, their shiny new capabilities take center stage. We are mesmerized by the unimaginable improvement in the quality of generated images and text. We become enraptured by the ability to interact with the system using plain English (so to speak) and with the ability to process multimodal inputs. Yet, despite these bedazzling capabilities, they remain, at their core, predictive data models.
It does not follow that existing governance practices need not apply. Yes, the scope and scale at which AI-enabled systems can be deployed change the risk and reward equation. But it is sophistry to suggest that if the latest tech innovation does not preserve privacy, then your privacy policy is moot, or to suggest that established product safety and reliability standards don’t extend to a new consumer product merely because it has an AI algorithm inside. Moreover, the use of an AI model does not absolve a company of the need to operate in nondiscriminatory ways nor does it spontaneously change existing rules governing data consent and sharing. In fact, this is exactly the opposite of the way governance should work.
On one hand, working from a blank page tends to tip an organization (or individual) into exaggerated risk-taking or exaggerated caution. Both can lead to suboptimal outcomes.
On the other hand, assessing new applications in the context of established policies, procedures, and regulations provides an intuitive jump-off point. It is also the quickest path to identifying gaps in your existing governance landscape.
Consider the use of an LLM-enabled chatbot in customer service. LLMs are the digital equivalent of your chatty, gossip loving aunt. They are not, by nature, privacy-preserving. So, should the intrinsic limitations of this AI technique in and of itself change your company’s stance on customer privacy? Certainly not. If privacy is a priority—on regulatory, brand, or ethical grounds—said policies will appropriately impose constraints on how LLMs are used.
If the fact that an LLM cannot guarantee privacy in certain contexts changes your stance, customer privacy may not have been that important to start with. Or, your corporate priorities have shifted. Modifying the existing policy to reflect the new norm ensures the impacts of such changes are subject to a critical assessment. In the meantime, the existing policy provides transitory guardrails for anyone contemplating the use of an LLM or whatever the latest emergent capability may be.
Across time, corporate priorities will continue to shift while business practices and technology capabilities likewise evolve. As they do, your governance ecosystem should adapt accordingly. Just don’t start with the proverbial blank sheet of paper.
Regardless of your company’s analytic maturity, there are a plethora of existing governance tools at your fingertips. They range from data management practices to privacy and security protocols; from risk-management processes to regulatory controls; from product safety standards to site reliability engineering (SRE) practices. Adaptive governance starts by maximizing use of the tools at hand. Robust foundational practices, processes, and controls are resilient by design: They prioritize reusability but allow for refactoring or selective additions when necessary, thereby making the entire toolkit more expansive yet maximally efficient and effective across time.