UK organisations overconfident in AI security despite gaps in preparedness

19 January 2026

A new survey by ANS reveals a significant disconnect between UK organizations’ perceptions of their AI security readiness and the reality of their security measures.

Despite widespread confidence—85% of senior IT decision-makers believe they have invested sufficiently in AI security—many lack proactive strategies and targeted investments to address AI-specific risks.

The study, which involved over 2,000 senior IT leaders across various sectors and organization sizes, found that only 42% embed security considerations early in AI projects, and just 37% prioritize security during AI deployment. Instead, many organizations tend to deploy AI tools and then react to threats as they emerge, rather than designing defenses from the outset.

Spending on cybersecurity remains high overall, with 53% allocating between 11% and 30% of their total IT budget to security. However, this investment primarily targets traditional infrastructure and applications, with minimal focus on AI-specific vulnerabilities such as model manipulation, prompt injection, or data leakage. Only 39% plan to invest in securing AI training processes over the next three years, including controls to prevent model poisoning and monitor for abnormal outputs.

The research highlights a significant gap in human resource investment as well—only 34% of organizations intend to spend on staff training for secure and responsible AI use. This is concerning given that employees often serve as entry points for cyberattacks, whether through sharing sensitive prompts, falling for AI-driven phishing attacks, or relying on manipulated outputs for decision-making.

Kyle Hill, CTO of ANS, emphasized that many organizations assume existing cybersecurity measures automatically extend to AI, which is a dangerous overconfidence. “AI introduces new attack surfaces and vulnerabilities that require dedicated governance, model security protocols, and employee awareness,” he said. “Without a proactive, risk-led approach, organizations leave their AI systems exposed to misuse and manipulation.”

ANS warns that treating AI security as an afterthought or merely a compliance checkbox leaves organizations vulnerable to emerging threats. As AI becomes more embedded in core business operations, the proportion of security budgets and staff training dedicated to AI-specific risks is expected to come under increased scrutiny.

Overall, the survey indicates that the UK’s AI security landscape is characterized by a false sense of security—highlighting the urgent need for organizations to shift from reactive defenses to strategic, proactive security planning that recognizes AI as a distinct and critical attack surface.