Most times, if asked, I will rattle off, without hesitation, a list of ways AI is being applied along with all the ways it can go wrong. You may also get a bit of a diatribe about why speaking of AI in the royal sense (i.e., “the AI”) is always a bad idea. It’s practically a (bad) party trick. Yet recently, when asked to characterize the primary ethical issues in AI, I had a perfect moment of mental blankness. It was uncomfortable and instructive on many fronts.
On the upside, that awkward moment of silence and verbal fumbling sparked a question: How many employees, faced with questions and under pressure about the use of AI, find themselves in exactly this spot on a daily basis? Might this partially explain why they hesitate to put forward new ideas or push back on questionable practices, and why they generally view AI governance as a murky bog to be avoided at all costs?
Given today’s hyped AI expectations, employee enablement must be a pre-eminent focus. AI-enabled systems infiltrate most aspects of today’s business, often unseen or unremarked upon, as when vendor systems are purchased without full disclosure or vetting of their embedded AI componentry, or when individuals use readily available tools such as ChatGPT to augment their work in an unofficial capacity. Add in exuberant market enthusiasm for all things AI everywhere, and we have a perfect storm. Even the best-prescribed policies and standards do not stand alone.
Good governance prescribes limits to ensure developed products are fit for use and do not run afoul of consumer expectations or regulatory and compliance requirements. But great governance prioritizes an additional objective: enhancing organizational literacy and self-awareness, allowing every employee, no matter their level, to think critically about when and where an AI-enabled system makes sense. Equally important: raising their hand when it doesn’t.
Creating broad functional literacy requires investment. Yes, in terms of time and people, but also in terms of tools. While a comprehensive blueprint is beyond the scope of this article, consider these three types of collateral to jump-start your AI literacy program:
- AI Materials Catalog

Think of this as a basic guide to your analytics and AI toolbox: not the outputs thereof, but the tools themselves. The intent is not to drown the reader in technical detail, nor to make them an expert on any specific technique. Rather, the catalog aims to familiarize all employees with the breadth of analytics and AI tools available, arming them with the base knowledge to identify whether a particular tool (e.g., an algorithmic technique) is reasonable to consider. Entries can be as simple as:
- A hammer is useful for …
- When using a hammer, be sure to/not to … (e.g., do not place your thumb between the hammer and the head of the nail)
- Consider using a screwdriver if …
The materials catalog should also provide direction on resources to consult and/or processes to invoke when wielding each tool.
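To make this concrete, here is one way a catalog entry might be structured, shown as a minimal Python sketch. The fields, the sample values, and the `CatalogEntry` name are my own illustration of the hammer template above, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """One tool in the analytics and AI materials catalog."""
    tool: str                      # the technique itself, not its outputs
    useful_for: list[str]          # "a hammer is useful for ..."
    cautions: list[str]            # "when using a hammer, be sure to/not to ..."
    alternatives: list[str]        # "consider using a screwdriver if ..."
    resources: list[str] = field(default_factory=list)  # resources/processes to consult

# Illustrative entry only; your catalog's content will differ.
entry = CatalogEntry(
    tool="Gradient-boosted trees",
    useful_for=["prediction on structured, tabular data"],
    cautions=["prone to overfitting without validation; not inherently interpretable"],
    alternatives=["consider a simple linear model if explanations must be plain"],
    resources=["model risk review process", "analytics community of practice"],
)
```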
I will note that my initial inclination was to call this a product catalog. However, this risks confusion with an index of the organization’s finished AI goods. Such an accounting may soon be mandated as part of emerging regulations in the EU and beyond. A comprehensive AI reference library will, therefore, catalog both raw algorithmic elements and finished products.
- AI Case Reference Library

An organization’s governance or ethics principles often sound clear in concept but become murky in reality when context comes into play. This is a well-known problem, and one for which legal jurisprudence provides a guide. Laws and statutes establish rules and rights, which are then interpreted in practice through case law.
In the same way, an AI case reference provides valuable insight into how your principles and regulations are intended to show up in practice. Such reference libraries are useful for highlighting your organization’s clear red lines, and even more so for demonstrating the trade-offs to be considered relative to context-sensitive reputational or ethical boundaries. This is particularly true when regulatory, legal, reputational, or ethical codes conflict, as is often the case.
As discussed previously, such references are not immutable, fixed precedents. Even so, they remain one of the best methods to concretely demonstrate how corporate codes apply to AI. To that end, your AI reference library should include applications that were greenlit with constraints as well as those that were stopped. External cases are useful early on, although they typically highlight negative examples rather than positive ones.
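A case record can follow a similar pattern to the catalog entry above: capture the application, the principles engaged, the trade-offs weighed, and the outcome, including any constraints attached to a conditional approval. The sketch below is illustrative only; the field names, decision categories, and sample case are my own assumptions, not an established format.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    APPROVED = "approved"
    APPROVED_WITH_CONSTRAINTS = "approved with constraints"
    STOPPED = "stopped"

@dataclass
class CaseRecord:
    """One precedent in the AI case reference library."""
    application: str               # what was proposed
    principles_engaged: list[str]  # which principles or regulations applied
    tradeoffs: str                 # the context-sensitive judgment that was made
    decision: Decision
    constraints: list[str]         # conditions attached, if approved with constraints

# Hypothetical case, for illustration only.
record = CaseRecord(
    application="Resume screening assistant",
    principles_engaged=["fairness principle", "anti-discrimination rules"],
    tradeoffs="Recruiting efficiency vs. risk of disparate impact on candidates",
    decision=Decision.APPROVED_WITH_CONSTRAINTS,
    constraints=["human review of all rejections", "periodic bias audit"],
)
```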
- AI Quick-Start Assessments
AI applications traverse a broad cross-section of regulatory, compliance, and legal obligations, in addition to your internal rules and codes of conduct. To ease the burden of entry for teams at all stages of development, consider creating guided, quick-start assessments that surface and scope the applicable governance requirements.
As Chris McClean explains in Pondering AI, Episode 26, this type of guided exploration helps remove barriers and improve engagement in ethical inquiry and regulatory compliance. Each assessment is a short scoping survey built into a gate of the development lifecycle, from ideation to deployment, ensuring that critical considerations are not overlooked for lack of awareness or in an exuberant rush to deploy.
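One lightweight way to wire such a survey into lifecycle gates is a simple mapping from each gate to its scoping questions. The gate names and questions below are illustrative assumptions, not a standard checklist; the point is only that each gate surfaces its own short list of considerations before work proceeds.

```python
# Gate names and questions are illustrative, not a prescribed standard.
GATES = {
    "ideation": [
        "Does the proposed use case touch personal or sensitive data?",
        "Could outputs materially affect customers or employees?",
    ],
    "development": [
        "Which internal policies and external regulations apply?",
        "Has the embedded AI componentry of any vendor tool been disclosed?",
    ],
    "deployment": [
        "Is there a monitoring and escalation path for model failures?",
        "Whom can a user contact to contest or question an outcome?",
    ],
}

def quick_start_assessment(gate: str) -> list[str]:
    """Return the scoping questions for a given lifecycle gate."""
    return GATES.get(gate, [])

for question in quick_start_assessment("ideation"):
    print(question)
```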