BSI highlights critical gaps in AI governance

30 October 2025

The British Standards Institution (BSI) has released its first RADAR Threat Intelligence Brief, revealing alarming gaps in how organisations govern the deployment and management of artificial intelligence (AI).

Based on an analysis of over 100 multinational annual reports and surveys of 850 senior leaders across multiple countries, the study warns that many businesses are underprepared for the risks associated with AI, despite increasing investments.

The report shows that 62% of senior executives plan to increase AI investment in the coming year, citing productivity gains (61%) and cost reductions (49%) as key drivers. However, only 24% of organisations worldwide have established formal AI governance programmes, rising only modestly to 34% among large enterprises (over 250 employees). Furthermore, just 24% actively monitor employee use of AI tools, and only 30% have formal processes to assess AI-related risks and mitigation strategies. Strikingly, only 22% restrict staff from using unauthorised AI tools, leaving organisations exposed to potential vulnerabilities.

The BSI analysis identified significant international variation in AI governance focus. UK companies reference AI governance and regulation 80% more frequently than firms in India and 73% more than those in China. Yet only 29% of UK businesses reported having an AI governance programme, with small firms lagging at just 14%. The report warns that smaller businesses may face serious challenges if they address AI risks reactively rather than proactively.

Awareness of training data sources is declining, with only 28% of leaders knowing the origins of their AI training data — down from 35% earlier in the year. Additionally, only 40% have clear processes for handling confidential data in AI training, increasing the risk of data misuse. Less than half (49%) now embed AI-related risks within broader compliance frameworks, and only 30% conduct formal risk assessments for AI vulnerabilities.

Financial services organisations lead in focusing on AI-related risks, especially cybersecurity, reflecting their emphasis on protecting consumer data and reputation. By contrast, the technology and transportation sectors place less emphasis on governance and risk management, indicating sector-specific differences in maturity.

The survey highlights that only 32% of businesses have processes for recording AI errors or failures, and just 29% have incident management frameworks for AI-related issues. About 18% believe their operations would halt if generative AI tools became unavailable. Additionally, 43% of organisations report diverting resources from other initiatives to fund AI investments, while 29% struggle with service duplication across departments.

Training and workforce readiness also lag behind. In the annual reports analysed, the term “automation” appears nearly seven times more often than words related to upskilling or training. Over half of leaders (56%) are confident that entry-level staff can use AI effectively, and 57% believe their organisation as a whole is prepared. However, only 34% have specific AI learning programmes, and much of the existing training appears reactive, driven by fear rather than strategic planning.

“While AI holds enormous potential, without strategic oversight and clear guardrails, organisations risk legal, operational, and reputational challenges. Divergent approaches across markets create vulnerabilities, and overconfidence in ungoverned AI can lead to avoidable failures,” said Susan Taylor Martin, CEO of BSI.

BSI urges organisations to embed comprehensive AI governance structures that go beyond mere compliance, emphasising proactive risk management, responsible data use, and workforce upskilling. Failure to do so could leave organisations exposed to significant legal liabilities, security breaches, and reputational damage.