External examples can help, although the popular press favors examples of AI gone spectacularly wrong. A good AI case reference establishes precedents for both ethical and unethical applications. Furthermore, your organization’s catalog should illustrate how to determine relevancy, assess risk, and make appropriate trade-offs in the context of your specific corporate ethos and application space.
As your experience grows, more explicit usage patterns and standards will be identified. Such precedents and patterns are not immutable. Rather, they provide context for evaluation. In every case, teams must assess how the current scenario differs from previous applications. What is known now that wasn’t known before? How might the consumer’s expectations change in this context? And so on. Remember: Consumer behaviors, corporate priorities, technical capabilities, legal/regulatory obligations, and cultural attitudes are ever changing. Governance must adapt accordingly.
Assign ownership—Adding AI to the mix doesn’t change how your organization allocates ownership for products and services. AI algorithms are embedded into digital products, not independent of them. It logically follows that the responsible party is the owner of the associated offering.
This does not mean that people are on their own when determining whether a given AI application is appropriate. Rather, it means that each team bringing AI-enabled products to market must ensure they uphold established principles and standards. Defining those principles is typically the mandate of cross-functional committees charged with ensuring that prevailing cultural, societal, legal, and organizational priorities are clearly articulated.
A final note: While the internal logic of an AI algorithm is often inscrutable, the results are not. Therefore, the extent to which stakeholders at all levels are incented to raise concerns and take responsibility for outcomes is the extent to which ethical consideration becomes business-as-usual.
Implement formal advisory boards and review mechanisms—The need to incorporate diverse voices representing ethical, legal, social, business, and technical perspectives into AI decision making has been well-established. This is not as simple as merely hiring an ethicist. Ethical governance must inform decision making at all levels: strategic to tactical. Formalizing accountability and responsibility for ethics requires engagement from multiple parties.
Depending on the size of your organization and your AI aspirations, the formalizing of responsibility may include:
- An executive broadly accountable for enterprise ethics strategy and execution. While this often falls to a chief data or analytics officer, executives accountable for digital transformation, customer experience, or risk may take point depending on your current AI priorities.
- An executive steering committee to sanction the overarching strategy, approve principles, and ensure engagement, funding, and compliance.
- Ethics council(s) accountable for defining discrete ethical principles, policies, and standards.
- Ethics advisory groups to provide counsel and advise on emerging trends and capabilities. Representation may include external partners or collaborative groups focused on ethical AI, academic and technical experts to speak to emerging technical capabilities, and partners or customers to ensure alignment with external expectations and needs.
- Ethics champions or stewards who create and maintain detailed practice guides, standards, and rules; advise on their tactical application; and adjudicate operational issues.
- Internal auditors composed of business, data, AI, and regulatory experts who validate that an AI application meets established ethical standards for usage, privacy, fairness, and so on.
- Advisors or subject matter experts incorporated into program, research, or project teams to facilitate tactical execution of ethics-related activities.
When possible, organizations should leverage existing governance functions rather than creating additional, parallel decision-making structures.
Whether you are extending an existing committee or starting anew, a guide to creating effective AI and ethics committees (www.accenture.com/us-en/insights/software-platforms/building-data-ai-ethics-committees) from Accenture can help. In addition, a recent paper by Dr. Thilo Hagendorff titled “The Ethics of AI Ethics” (https://arxiv.org/ftp/arxiv/papers/1903/1903.03425.pdf) provides a succinct evaluation of the dominant ethical frameworks and perspectives on their effectiveness.
Be sure to do the following:
Acknowledge Uncertainty
As the adage goes, the only thing we have to fear is fear itself. The very perception of risk in AI can stymie innovation and halt adoption in its tracks. Yes, AI algorithms will make mistakes. No, their conclusions aren’t always predictable or guaranteed. But the need to act in the face of uncertainty and the propensity for error isn’t unique to AI. In this respect, AI is human too. The difference lies in the scale and speed at which AI can be deployed.