The key to enterprise-wide AI adoption is trust. Without transparency and explainability, organizations will struggle to deliver successful AI initiatives. Interpretability doesn’t just benefit business users and C-suite execs, either; on the technical side, interpretable models make debugging easier, speed up model refinement, and smooth the integration of AI into existing workflows.
Joining DBTA’s webinar, Explainability and Interpretability: Building Trustworthy AI Models, Christian Capdeville, senior director, content and product marketing, Dataiku, and Stephanie McReynolds, VP of product and portfolio marketing, OneTrust, offered their expertise on shedding the “black box” nature of AI to drive trust and, in turn, AI success. This webinar was conducted in partnership with Enterprise AI World.
According to Capdeville, trust in AI means that organizations maintain confidence in the many AI products their people develop, at every stage of the AI journey. This is especially crucial when scaling AI initiatives: the bigger the company and the implementation, the more difficult explainability becomes to manage and deliver.
Dataiku’s approach to trust in AI consists of three pillars:
- AI Governance: Orchestrate and enforce rules, processes, and requirements that align AI initiatives with business, risk, and other objectives.
- Responsible AI: Secure reliable, accountable, fair, transparent, and explainable models and data pipelines.
- MLOps: Enable smooth and systematic operationalization of data projects across stacks.
Auditability and governance are core to Dataiku, which bakes visibility and control into model development itself through:
- Auditability and explainability: Versioning and reproducibility; model explainability; fairness metrics; interactive scoring; documentation generator
- Operationalization: End-to-end operationalization; model performance metrics; alerts and checks; control access and permission management
- Monitoring and control: Monitor AI models; model benchmarking; control drift (a minimal drift-detection sketch follows this list)
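Dataiku exposes drift control as a built-in capability; as a rough, vendor-neutral illustration of what drift monitoring involves, the Python sketch below computes the population stability index (PSI) for a single feature. The function, thresholds, and simulated data are illustrative conventions and assumptions, not Dataiku’s API.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a feature's training distribution to its live distribution.

    By common convention, PSI < 0.1 is read as stable, 0.1-0.25 as
    moderate drift, and > 0.25 as significant drift.
    """
    # Bin edges come from the reference (training) data
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid division by zero and log(0)
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Simulate a feature whose live distribution has shifted since training
rng = np.random.default_rng(42)
training_feature = rng.normal(loc=0.0, scale=1.0, size=10_000)
live_feature = rng.normal(loc=0.8, scale=1.3, size=2_000)

psi = population_stability_index(training_feature, live_feature)
status = "significant drift" if psi > 0.25 else "moderate/no drift"
print(f"PSI = {psi:.3f} ({status})")
```

In practice, a check like this would run per feature on a schedule, feeding the alerts-and-checks machinery described above.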
Notably, the rise of agentic AI has introduced new explainability and trust challenges, according to Capdeville: AI agents pose risks relating to their autonomy, their costs, regulatory adherence, and vendor lock-in. To address these obstacles, enterprises need:
- Strong quality assurance tools and effective post-deployment monitoring to catch errors early
- Actionable use cases with the ability to deploy solutions quickly and measure impact
- Ability to monitor and control API fees and infrastructure costs
- Strong guardrails and audit trails for every agent action combined with strict data access rights (see the sketch after this list)
- Agility to switch from one AI service to another and avoid lock-in
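To make the guardrails-and-audit-trails point concrete, here is a minimal Python sketch of one common pattern: wrapping each agent tool call in a decorator that records the action and enforces a data access policy before anything executes. All names here (audited, query_dataset, ALLOWED_DATASETS) are hypothetical and not tied to any particular agent framework.

```python
import functools
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("agent.audit")

# Hypothetical access policy: datasets this agent is allowed to read
ALLOWED_DATASETS = {"sales_q3", "support_tickets"}

def audited(tool):
    """Write an audit-trail entry for every agent tool invocation."""
    @functools.wraps(tool)
    def wrapper(*args, **kwargs):
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "tool": tool.__name__,
            "args": args,
            "kwargs": kwargs,
        }
        audit_log.info(json.dumps(entry, default=str))
        return tool(*args, **kwargs)
    return wrapper

@audited
def query_dataset(name: str, query: str) -> str:
    # Guardrail: enforce data access rights before executing the query
    if name not in ALLOWED_DATASETS:
        raise PermissionError(f"agent may not read dataset {name!r}")
    return f"results of {query!r} against {name}"

print(query_dataset("sales_q3", "SELECT count(*) FROM orders"))
```

Because every call is logged before it runs, even a failed or blocked action leaves a trace that auditors can review.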
McReynolds explained that establishing proper AI governance and explainability isn’t a data security issue; it’s a context issue. Context is king for governing AI models and data alike, enabling organizations to communicate how AI models behave in nontechnical terms.
“The key change in technology that we really need to be able to govern AI well is an understanding of additional context of the data and the AI algorithms, the context of things that are going into those algorithms,” said McReynolds. “You need to tag and understand a lot more context about your data. Not only what this data contains…but also you need to understand the business context of that data.”
Additional contexts necessary for establishing trustworthy data and AI include:
- Data Context: What does this data contain?
- Business Context: What is the intended use of the data?
- Consent Context: What is the customer’s preference?
- Regulatory Context: Which regulations and rules apply?
“This additional context is putting a lot more pressure on how we manage data in our environments and how we evaluate how algorithms are using that data,” added McReynolds.
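As a rough illustration of the kind of context tagging McReynolds describes, the Python sketch below attaches all four contexts to a dataset and uses them to gate a downstream decision. The class and field names are hypothetical, not OneTrust’s data model.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetContext:
    """Context tags attached to a dataset (names are illustrative)."""
    data_context: str                 # What does this data contain?
    business_context: str             # What is the intended use of the data?
    consent_context: str              # What is the customer's preference?
    regulations: list[str] = field(default_factory=list)  # Which rules apply?

email_events = DatasetContext(
    data_context="customer email addresses and campaign click events",
    business_context="marketing attribution",
    consent_context="opt-in marketing consent required",
    regulations=["GDPR", "CCPA"],
)

def may_use_for(dataset: DatasetContext, purpose: str) -> bool:
    # A model may only consume the data for its declared business purpose
    return purpose == dataset.business_context

print(may_use_for(email_events, "marketing attribution"))  # True
print(may_use_for(email_events, "credit scoring"))         # False
```

Tagging this metadata at the dataset level is what lets a governance platform evaluate, automatically and continuously, whether an algorithm’s use of the data matches its declared purpose, consent, and applicable regulations.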
OneTrust offers a single platform to responsibly collect, govern, and use data, underscored by continuous monitoring and compliance. Its six solutions, centralized within that platform, “are optimized to support your workflow for governing data and models throughout the entire lifecycle,” McReynolds explained.
This is only a snippet of the Explainability and Interpretability: Building Trustworthy AI Models webinar. For the full presentation, featuring more detailed explanations, a Q&A, and more, you can view an archived version of the webinar here.