The rapid adoption of AI has heightened concerns about appropriate use, privacy, and the societal impact of biased or inequitable applications. As a result, many institutions and partnerships have created ethical guidelines for AI.
Yet, while many enterprises are codifying ethical AI principles, it is not always clear how (or if) they translate to day-to-day application. In fact, emerging research suggests that familiarity with a code of ethics does not, in and of itself, influence behavior. For ethics to take root, sustainable governance practices must be infused into the fabric of an organization’s AI ecosystem. Awareness, literacy, corporate commitment, a shared purpose, and personal responsibility all play a role.
So, how can organizations bridge the gap between AI ethics in principle and in practice?
Principle Practicalities
Your AI principles look good on paper. But are they practical? Here are some approaches to consider when putting your AI principles to work:
Favor Cogent Explanations Over Perfunctory Proclamations—You would be hard-pressed to find a credible organization that doesn’t support equality in principle. But do people understand how inequality might show up in the organization’s AI work? Or, broadly speaking, what behaviors and actions are required to close the loop between good intent and applied practice? In medicine, respect for people is a first principle. Respect (principle) is demonstrated by affording individuals dignity and autonomy (core values) through informed consent (practice). Principles not linked to practice in this way are interesting but not instructive.
Encourage Critical Thinking, Not Just Compliance—Consider the recent spate of algorithms found to be biased against women. Although gender was often not a discrete data point, other variables such as resume pronouns, salary, title, and promotion history served as proxies. Compliance-centric approaches favor yes/no tasks (e.g., “verify gender is not an input”) over critical thinking (e.g., “is gender appropriate in this context?” or “how else might gender be represented in our data?”). Effective governance provides a framework for thoroughly examining a problem from multiple angles, not merely a to-do list to check off.
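The proxy question can be made concrete. What follows is a minimal sketch of one such check, assuming the data lives in a pandas DataFrame: if an auxiliary model can recover the withheld sensitive attribute from the remaining columns, those columns encode it indirectly. The column names, file name, and threshold are hypothetical.

```python
# A minimal proxy audit: train a throwaway model to predict the withheld
# sensitive attribute from everything else. Accuracy well above the base
# rate means other features encode that attribute indirectly.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def proxy_risk(df: pd.DataFrame, sensitive_col: str) -> float:
    """Cross-validated accuracy of predicting `sensitive_col`
    from all other columns in `df`."""
    X = pd.get_dummies(df.drop(columns=[sensitive_col]))  # encode categoricals
    y = df[sensitive_col]
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(model, X, y, cv=5).mean()

# Hypothetical usage on hiring data with gender withheld from the model:
# df = pd.read_csv("candidates.csv")
# score = proxy_risk(df, "gender")
# if score > 0.65:  # the threshold is a judgment call, not a standard
#     print(f"Warning: gender is recoverable from other features ({score:.0%})")
```

A high score does not prove the production model is biased; it shows that the critical-thinking question above has teeth and the data warrants closer review.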
Prioritize Contextual Guidelines Over Constraints—It is tempting to govern by fiat: explicitly prescribing what can and cannot be done. Resist this urge. Governance by constraint absolves individuals of responsibility by passing the buck to the luckless owner of the rule book. Someone else didn’t anticipate that consequence or this application? Well, “they” (the feckless someone else) said it was OK. Or—more damning—“they” didn’t say it wasn’t.
Furthermore, the scope of problems that can be categorically addressed without an impressive list of “if-then-buts” is quite narrow. And you can, of course, document only what you know or can imagine now. Ethical governance dependent on perfect perception and foresight is doomed.
Make Consideration Integral to Cadence—Mindfully evaluating the risks, rewards, and ramifications of AI solutions requires time. Creating that space can seem counter to the fail-fast/learn-fast mantra frequently espoused for data science projects. However, this doesn’t need to become an all-encompassing science project (no pun intended). The incremental time committed throughout a project costs far less than a solution that never sees the light of day, or than the reputational hit when a foreseeable error brings the system down post-deployment.
Organizing for Success
There is no one-size-fits-all blueprint for organizational governance. Your company culture; incumbent organizational dynamics, analytics, and data capabilities; digital maturity; and risk tolerance all influence the shape of your governance framework. There are, however, emerging practices every organization can use to navigate this dynamic and rapidly evolving playing field. While not exhaustive, the following practices can provide an on-ramp for ethical AI governance:
Align principles with actions—Turning principle into practice starts with setting clear expectations for what each principle demands. What, for example, constitutes informed consent? Who can consent? What level of detail regarding the providers involved, procedural steps, potential outcomes, and alternate approaches is required? Does rattling off every potential side effect in excruciating detail constitute meaningful communication? How will the information be communicated: in written format, verbally, or both? Can informed consent be superseded by other concerns? You get the gist.
In the case of AI, evolving regulations can provide initial guidance. For instance, some regulations require companies to clearly notify consumers when a decision is made by an automated system and to provide the right to human review when a decision is disputed. However, consumer and societal expectations are broader than legal regulations: Compliance is, therefore, merely table stakes.
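Operationally, such a rule implies record keeping that ties every automated decision to a consumer notice and a human-review path. Below is a minimal sketch of what that record might look like; the class and field names are hypothetical and do not reflect any specific statute’s wording.

```python
# A hypothetical record linking each automated decision to the consumer
# notice and human-review hook that emerging regulations call for.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AutomatedDecision:
    subject_id: str                      # whose case was decided
    model_version: str                   # which system produced the outcome
    outcome: str                         # e.g., "credit_denied"
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    notice_sent: bool = False            # consumer told a machine decided
    review_requested: bool = False       # consumer disputed the outcome
    reviewer_id: Optional[str] = None    # human who re-examined the case
    review_outcome: Optional[str] = None

    def request_review(self) -> None:
        """Flag a disputed decision for mandatory human re-examination."""
        self.review_requested = True

# decision = AutomatedDecision("cust-123", "scoring-v4", "credit_denied")
# decision.notice_sent = True   # the required notification
# decision.request_review()     # the consumer exercises the right to review
```

Capturing this metadata at decision time is far cheaper than reconstructing it after a dispute.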
Publish substantive reference cases—Laws and statutes establish citizens’ rights and accepted behaviors. Case law (i.e., court judgments and their rationale) clarifies how those established norms are applied in real life. Establishing an “AI case reference” is equally valuable in illustrating how ethical principles should be applied to day-to-day AI work.