Exabeam highlights rising insider threats fuelled by AI

27 August 2025

Exabeam, a leading global provider of intelligence and automation solutions for security operations, has released a comprehensive multinational report revealing a dramatic shift in cybersecurity risks.

‘From Human to Hybrid: How AI and the Analytics Gap Are Fuelling Insider Risk’ draws on a survey of 1,010 cybersecurity professionals across key sectors, uncovering that insider threats have now overtaken external attacks as the most pressing security concern, a trend accelerated by advances in artificial intelligence (AI).

The research indicates that 64% of respondents perceive insiders — whether malicious or compromised — as a greater threat than external adversaries. Generative AI (GenAI) plays a significant role in this shift, enabling attackers to conduct faster, more covert, and harder-to-detect attacks.

Steve Wilson, Chief AI and Product Officer at Exabeam, commented on this evolution, emphasising that insiders are no longer just individuals but AI agents capable of logging in with valid credentials, impersonating trusted voices, and moving at machine speed. He highlighted the challenge for organisations to detect when legitimate access is being exploited or abused.

Insider activity is increasing across industries, driven by both malicious intent and accidental breaches. Over half of organisations (53%) have experienced a rise in insider incidents over the past year, with 54% expecting this upward trend to continue. Governments are facing the steepest increase, with 73% predicting a rise, followed by manufacturing at 60% and healthcare at 53%. This escalation is largely attributed to growing access to sensitive data and systems, in addition to expanding attack surfaces.

The growth varies significantly by region, with Asia-Pacific and Japan leading the way: in both regions, 69% of organisations anticipate an increase in insider threats, reflecting heightened awareness of identity-driven attacks. Conversely, the Middle East stands out, with nearly one-third (30%) of organisations anticipating a decrease, possibly indicating confidence in existing security measures or an underestimation of emerging risks. These regional disparities underscore the complex and evolving nature of insider threats, requiring tailored defence strategies.

AI has become a force multiplier in these threats, enabling attacks to be executed with unprecedented speed and sophistication. The report highlights that AI-driven phishing and social engineering now rank among the top three insider threat vectors, cited by 27% of respondents. These attacks can adapt in real time, impersonate legitimate communications, and exploit trust at a scale and pace impossible for human adversaries alone.

The challenge is compounded by widespread unauthorised use of GenAI tools, with over three-quarters (76%) of organisations reporting some level of unapproved AI activity. Sectors such as technology (40%), government (38%), and financial services (32%) are most affected. Regional differences are notable, with the Middle East reporting the highest levels of unauthorised GenAI use at 31%, reflecting rapid AI adoption and governance gaps. This convergence of insider access and AI capability creates threats that often evade traditional security controls, demanding more advanced behavioural detection methods.

Despite high levels of awareness, most organisations are falling short in detecting insider threats effectively. While 88% have some form of insider threat programme in place, only 44% utilise user and entity behaviour analytics (UEBA), a critical component for early detection of abnormal activity. Many still rely heavily on identity management, security training, data loss prevention (DLP), and endpoint detection and response (EDR) tools, which offer visibility but lack the behavioural context to identify subtle or emerging risks.
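The behavioural context that UEBA adds can be illustrated with a deliberately simplified sketch (this is a generic textbook approach, not Exabeam's product or any specific vendor implementation): a baseline of a user's normal activity is learned, and new events are scored by how far they deviate from it.

```python
from statistics import mean, stdev

def anomaly_score(history, observed):
    """Z-score of an observed value against a user's behavioural baseline.

    A higher score means the event deviates further from learned
    normal behaviour; real UEBA systems combine many such signals.
    """
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0 if observed == mu else float("inf")
    return abs(observed - mu) / sigma

# Hypothetical baseline: a user's typical login hours over recent weeks.
baseline_login_hours = [9, 9, 10, 8, 9, 10, 9, 8]

# A 3 a.m. login scores far outside the baseline and would be flagged,
# even though the credentials used are perfectly valid.
print(anomaly_score(baseline_login_hours, 3))
```

The point of the sketch is the contrast the report draws: identity management and DLP can confirm that access was authorised, but only a behavioural baseline can flag that authorised access is being used abnormally.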

AI adoption in threat detection is widespread, with 97% of organisations employing some form of AI in their security tools. However, governance and operational maturity lag behind, with many AI tools still in pilot or evaluation phases rather than fully operational deployment. Major hurdles include privacy concerns, fragmented toolsets, and difficulties in interpreting user intent, which hinder effective threat identification.

Kevin Kirkwood, CISO at Exabeam, emphasised that AI has lent insider activity a speed and subtlety that traditional defence mechanisms cannot match. He warned that without strong governance and oversight, organisations risk falling behind in this evolving threat landscape. A new approach to insider threat defence, one that incorporates context, real-time detection, and cross-team collaboration, is essential to closing the gap.

To address these challenges, organisations must go beyond compliance and develop strategies that prioritise contextual understanding of activity, accurately distinguish between human and AI-driven behaviour, and foster leadership engagement across functions. Success will depend on reducing detection and response times, narrowing the window of opportunity for insider activities, and continuously adapting strategies as threats evolve.