Only a quarter (25 percent) of businesses have a fully implemented artificial intelligence (AI) governance program. That’s according to new research from AuditBoard.
The report, From blueprint to reality: Execute effective AI governance in a volatile landscape, revealed that while many companies have drafted policies, few have embedded AI governance into their operational fabric, leaving them susceptible to unforeseen risks.
The findings are based on a survey of more than 400 governance, risk and compliance (GRC) and audit professionals across the US, Canada, Germany and the UK.
The AI governance gap
Organizations across industry sectors are racing to integrate AI tools into their core business processes, seeking productivity gains and competitive advantages. However, this momentum has triggered a parallel challenge: managing the associated risks.
The resulting AI policy-practice gap is emerging as a new risk frontier, rooted in executional uncertainty, cultural fragmentation and misaligned ownership.
AuditBoard’s report found that, while 92 percent of respondents are confident in their visibility of third-party AI use, only two-thirds of organizations report conducting formal, AI-specific risk assessments for third-party models or vendors. That leaves roughly a third of firms relying on external AI systems without a clear understanding of the risks they may pose.
What’s more, while 86 percent of respondents are aware of upcoming AI regulations and those already in force, few have made the leap from documentation to disciplined execution.
What’s hampering AI governance?
Barriers to AI governance are cultural, not technical, according to the research. Respondents identified the leading obstacles as a lack of clear ownership (44 percent), insufficient internal expertise (39 percent) and resource constraints (34 percent).
Fewer than 15 percent view a lack of tools as the main problem. Policy tells the organization what should happen; culture and structure determine whether it actually does.
“This report validates the critical need for a more integrated, operational approach to AI risk,” said Michael Rasmussen, CEO of GRC Report.
AI governance today is a test of execution, not awareness, according to Rich Marcus, chief information security officer (CISO) at AuditBoard. “This report confirms that the most persistent AI governance challenges are clarity, ownership and alignment. Organizations that treat governance as a core capability, not a compliance box-checking exercise, will be better positioned to manage risk, build trust and respond to a rapidly evolving regulatory landscape.”
What does AI governance look like?
According to Doug Shannon, intelligent automation and AI thought leader, AI governance should establish clear oversight that defines the chain of thought, reasoning and custody behind AI outputs, ensuring transparency and auditability.
“It should also include alignment on the company’s purpose for using AI and a mechanism for employee feedback to keep the systems grounded in reality and continually improving,” he said.
7 key aspects of AI governance
- Ethical principles
- Regulation and compliance
- Accountability
- Risk management
- Transparency and explainability
- Data governance
- Human oversight
However, AI governance is not without its challenges. Obstacles include a lack of transparency and explainability, global fragmentation, privacy risks, misuse, capability and resource gaps, and a lack of testing and auditing standards.