Rather than waiting for perfection (aka abandoning AI), leading adopters confront these realities head-on, in the context of the problem to be solved. In some cases, this means starting with simpler models and less risky or better-bounded problems; in others, it means engineering more robust safety controls or enforcing tighter operational boundaries. Very often, it helps to relegate the machine to an advisory role or, sometimes, to forgo machine-driven decision making altogether.
While AI may present novel challenges, you don’t always need novel methods to rationalize AI’s risk and reward. There are well-established risk management frameworks, auditing practices, and engineering quality control methods, including safety engineering protocols, in use today. Financial services institutions used independent internal auditors to certify statistical and analytic outputs well before AI entered the scene. Closer to home, the Partnership on AI and Google recently published a research paper exploring an end-to-end framework for algorithmic auditing (https://dl.acm.org/doi/abs/10.1145/3351095.3372873).
Incorporate Ethical Considerations Into the AI Lifecycle
A few bad actors aside, most companies that have gotten into hot water over unethical AI applications did not set out to do harm. However, it is easy to miss something if you aren’t looking, or if you are overly focused on a single objective to the detriment of a wider perspective.
Bias and other unintended, untoward outcomes can be introduced through the data, the algorithmic design, or the environment into which an AI application is deployed. Identifying potential sources of error and mitigating risk requires discrete effort throughout the life of a model. Activities run the gamut from crisply defining the desired operating environment and outcomes to assessing data constraints to performing formal risk assessments and cross-functional reviews at key stages of development.
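To make one of those activities concrete, the sketch below shows what a minimal, discrete check of model outputs for group-level disparities might look like. It is purely illustrative: the column names, groups, and the rule-of-thumb review threshold are assumptions for the example, not a standard or a complete fairness assessment.

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes (e.g., approvals) per group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group selection rate divided by the highest.
    Values well below 1.0 warrant human review; ~0.8 is a common rule of thumb."""
    return rates.min() / rates.max()

# Toy scored data; 'group' and 'approved' are hypothetical column names.
scored = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   1,   0,   0,   0],
})

rates = selection_rates(scored, "group", "approved")
print(rates)                          # per-group approval rates
print(disparate_impact_ratio(rates))  # 0.33 here -> flag for cross-functional review
```

A check like this is only one input to a review; the point is that it is defined up front, run at key stages, and its threshold is agreed on before the model ships.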
As a recent Microsoft research paper (www.microsoft.com/en-us/research/publication/co-designing-checklists-to-understand-organizational-challenges-and-opportunities-around-fairness-in-ai) explained, making ethics tangible for practitioners can be challenging. And not every AI project engenders the same level of risk or exposure. Nevertheless, failing to deliberately incorporate discrete ethical activities into existing development processes is a surefire way to ensure they are not considered.
Create a Collaborative Community of Practice
AI benefits when diverse perspectives are brought to bear. Formal teaming structures facilitate some intersections but are limited to point-in-time projects. To stimulate broad, organic engagement, leading companies nurture internal social networks with affiliated interests. They do this by creating channels for individuals with diverse backgrounds and experience to ask questions, compare experiences, and share emerging practices and learnings. Communities do not need to be ethics-specific: The topic interlocks with AI and other data-related interest groups. There is also a natural alignment with existing diversity and corporate social responsibility initiatives.
Even the most enthusiastic community requires appropriate tools and support to thrive. Some community engagement can be facilitated through tools such as Slack, Teams, or GitHub. Other activities require deliberate planning and should work in concert with more formal literacy programs. Such activities might include sponsoring lunch-and-learns, hosting hackathons, creating networking opportunities, and providing hands-on training (extra credit if those are hosted in your own innovation lab).
Support Literacy
As the collective appetite for digital transformation and enabling capabilities such as AI has become ravenous, the call for improved data and technical literacy has grown with it. Yet, despite the growing calls for employees to embrace continuous learning, funding for and availability of training can be limited. When provided, literacy programs often focus solely on technical capabilities (data quality, data management, and AI algorithm development) and not the context in which these applications will be leveraged.
For an organization to broadly embrace an ethical approach to AI, the definition of literacy must expand. Comprehensive literacy programs provide multi-dimensional training paths: teaching fundamental AI and data concepts (not coding) to business audiences, teaching fundamental business concepts to technical audiences, enhancing communication skills for all (e.g., at the risk of sounding flippant, how to talk to a data scientist and how to ask better questions), and creating a common understanding of the corporate, regulatory/legal, and social ecosystem to which everyone is accountable.
What’s Ahead
These emerging practices are not comprehensive. Nor do they address the many tactical capabilities that enable ethical AI, including robust data management (data quality, privacy, and security), explainable AI (XAI) techniques, DataOps, and MLOps. But rigorous data and model validation can only be evaluated in the context of clearly defined intended outcomes. The ability to deploy models at scale won’t score points if those models negatively impact your customers or damage your brand.
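As a purely illustrative sketch of how intended outcomes and validation can be tied together in an MLOps pipeline, the example below encodes release criteria as explicit, reviewable thresholds and returns a go/no-go decision. The metric names and values are hypothetical, chosen for the example rather than recommended targets.

```python
from dataclasses import dataclass

@dataclass
class ReleaseCriteria:
    """Intended outcomes expressed as explicit, reviewable thresholds (illustrative values)."""
    min_accuracy: float = 0.85          # hypothetical business target
    min_disparate_impact: float = 0.80  # hypothetical fairness floor
    max_missing_rate: float = 0.05      # hypothetical data-quality bound

def ready_to_promote(metrics: dict, criteria: ReleaseCriteria):
    """Return a go/no-go decision plus the reasons for any failure."""
    failures = []
    if metrics["accuracy"] < criteria.min_accuracy:
        failures.append("accuracy below target")
    if metrics["disparate_impact"] < criteria.min_disparate_impact:
        failures.append("disparate impact below floor")
    if metrics["missing_rate"] > criteria.max_missing_rate:
        failures.append("too much missing data")
    return (not failures, failures)

ok, reasons = ready_to_promote(
    {"accuracy": 0.91, "disparate_impact": 0.72, "missing_rate": 0.02},
    ReleaseCriteria(),
)
print(ok, reasons)  # False ['disparate impact below floor'] -> route to human review
```

The specific checks matter less than the pattern: the criteria are written down, agreed on across functions, and enforced before a model reaches customers.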
Without a solid governance foundation to build upon, even technically capable companies can founder. And far from inhibiting innovation, instituting an appropriate level of rigor in AI development promotes agility and adoption. It is no coincidence that companies that report deploying AI broadly in the enterprise also report an increased focus on transparency, ethics, and operationalizing AI governance.