Deriving tangible value from AI depends on a variety of factors, but perhaps the most crucial is governance, which determines whether AI is implemented safely. Yet evolving regulations, the difficulty of measuring data trustworthiness, and the unique needs of each business make governance an increasingly complex task.
In Enterprise AI World’s latest webinar, Building and Managing an Effective AI Governance Strategy, co-hosted by DBTA, experts outlined practical steps for doing exactly that.
David Hendrawirawan, principal advisory consultant, Informatica, explained that “with AI, there are several layers of trust,” which include attributes such as explainability, fairness, transparency and accountability, enhanced privacy, and validity and reliability.
Each of these aspects depends on several capabilities. Validity and reliability require data quality management, data versioning, and master data management (MDM). Privacy calls for data access management, anonymization, and data minimization. Fairness, in turn, relies on data bias metrics, drift monitoring and observability, and synthetic data enrichment.
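To make one of these capabilities concrete, drift monitoring often starts with a simple distribution comparison between training data and live data. The sketch below uses the Population Stability Index (PSI), a common drift metric; the bin count and thresholds are illustrative assumptions, not anything prescribed in the webinar.

```python
import math
from collections import Counter

def psi(expected, actual, bins=5):
    """Population Stability Index: a simple, widely used drift metric.

    Compares the distribution of a feature at training time (`expected`)
    with its live distribution (`actual`). A common rule of thumb:
    PSI < 0.1 is stable, 0.1-0.25 is moderate drift, > 0.25 is significant.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def bucket(values):
        # Histogram each sample into the bins defined by the baseline range.
        counts = Counter(
            min(max(int((v - lo) / width), 0), bins - 1) for v in values
        )
        total = len(values)
        # Smooth empty buckets to avoid log(0).
        return [(counts.get(b, 0) + 1e-4) / total for b in range(bins)]

    p, q = bucket(expected), bucket(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Identical distributions score near zero; shifted data scores high.
baseline = [0.1 * i for i in range(100)]
shifted = [0.1 * i + 4.0 for i in range(100)]
print(abs(psi(baseline, baseline)) < 0.01)  # True
print(psi(baseline, shifted) > 0.25)        # True
```

In practice, a check like this runs on a schedule per feature, with alerts feeding the observability layer the speakers described.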
In illustrating these capabilities, Hendrawirawan made a critical point: “It’s a lot. Nobody [becomes] perfect over one day; this is a maturity journey.”
“Pace yourself, be patient, and just persist in improving these capabilities,” Hendrawirawan continued.
Danny Sandwell, technology strategist, erwin by Quest, emphasized that the impact of AI “truly is a real game changer. We’ve seen game changers before, but I don’t think that anything I’ve seen in my career has aligned with what we’re looking at in AI.”
“I think it’s obviously a challenge for a lot of organizations—but the opportunity is massive,” said Sandwell. One massive challenge, fitting the webinar’s theme, is security: According to a 2024 Statista report, cybercrime would rank as the third-largest economy in the world and is growing at a 15% CAGR, with attackers extensively leveraging AI.
Ultimately, for AI to transform your business—and do so safely—you need a strong foundation for AI, explained Sandwell. This involves building and assuring a data “trust-stack,” composed of:
- Data readiness: Data modeling, data catalog and glossary, semantic layer, data marketplace
- Data platform readiness: Multiplatform database management, data movement to modern data platforms, database observability, pipeline efficiency
- AI readiness: Data trust scores, observable data quality including data drift and bias
- Trusted data products in the marketplace: “Trusted” data products used by the business for LLMs and reports, emphasizing compliance, efficiency, and fitness for purpose
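A data trust score, the AI-readiness element above, is typically a weighted composite of observable quality dimensions that gates whether a data product reaches the marketplace. The sketch below is a minimal illustration; the dimension names, weights, and publication threshold are assumptions for the example, not erwin’s actual scoring model.

```python
from dataclasses import dataclass

# Hypothetical weights and dimensions; real trust scores combine
# many more signals (lineage, stewardship, usage, etc.).
WEIGHTS = {"completeness": 0.4, "freshness": 0.3, "fairness": 0.3}

@dataclass
class DataProductProfile:
    completeness: float  # fraction of required fields populated, 0-1
    freshness: float     # recency score, 0-1
    fairness: float      # 1 minus measured bias magnitude, 0-1

def trust_score(profile: DataProductProfile) -> float:
    """Weighted composite score in [0, 100], used to gate publication."""
    raw = sum(WEIGHTS[k] * getattr(profile, k) for k in WEIGHTS)
    return round(100 * raw, 1)

profile = DataProductProfile(completeness=0.98, freshness=0.9, fairness=0.95)
score = trust_score(profile)
print(score, "publish" if score >= 80 else "needs review")  # 94.7 publish
```

The design point is that the score is observable and reproducible, so consumers of the data product can see why it is (or is not) trusted.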
Safeguarding the business while accelerating AI innovation is the goal, but it is confronted by several daunting realities, noted Ahmet Gyger, senior director of product management, Domino Data Lab. From regulatory exposure to lost trust, slow delivery, and high costs, companies require a strong governance process to manage each of these AI concerns.
It’s crucial to recognize that governance workflows and teams are under considerable pressure to deliver on AI’s promise, often hampered by redundant work, manual compliance processes, scarce resources, and poor risk identification. The answer to these challenges, according to Gyger, is to implement AI governed by design within a unified, transparent, and scalable approach. This includes:
- Shared policies and aligned teams in one system of record
- Comprehensive visibility and full traceability
- Automated checks and reviews for scaled enforcement
- Governance built-in
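The “automated checks and reviews” element above can be pictured as a small policy gate that runs against a model’s metadata before deployment. This is a hypothetical sketch of the pattern, not Domino’s implementation; the policy names and metadata fields are invented for illustration.

```python
# Illustrative policy rules: each pairs a name with a predicate over
# the model's metadata. Real platforms wire such gates into review
# workflows and a shared system of record.
POLICIES = [
    ("has_owner", lambda m: bool(m.get("owner"))),
    ("data_from_approved_catalog", lambda m: m.get("data_source") == "catalog"),
    ("bias_within_limit", lambda m: m.get("bias_score", 1.0) <= 0.1),
]

def review(model_metadata: dict) -> list[str]:
    """Return the names of failed policies; an empty list means approved."""
    return [name for name, check in POLICIES if not check(model_metadata)]

candidate = {"owner": "risk-team", "data_source": "catalog", "bias_score": 0.05}
failures = review(candidate)
print("approved" if not failures else f"blocked: {failures}")  # approved
```

Because every check is codified, enforcement scales with the number of models instead of the number of reviewers, which is the core of the governed-by-design argument.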
This is only a snippet of the full Building and Managing an Effective AI Governance Strategy webinar. For more detailed explanations, additional challenges and technological solutions, a Q&A, and more, you can view an archived version of the webinar here.