23 May 2025
Phishing remains the top cause of identity breaches, and AI is intensifying this threat. Cybercriminals are now capable of crafting highly convincing tailored videos, voice recordings, and text-based scams that deceive even vigilant users. CyberArk’s research found that two-thirds (66%) of UK organizations experienced successful phishing attacks last year, including those using AI-driven deepfake scams; and more than one-third (35%) of these organizations fell victim multiple times.
Conversely, AI is also revolutionizing cybersecurity defenses. Its ability to analyze threats in real time, automate responses, sift through vast amounts of data, and handle routine tasks is empowering security teams to focus on strategic initiatives. The report states that 87% of UK organizations are leveraging AI and Large Language Models (LLMs) as part of their identity security strategies; 59% utilize AI for advanced identity verification; and nearly half (49%) consider AI and LLM adoption to be the leading driver of their cybersecurity investments this year, recognizing AI as one of the most impactful tools in reducing identity-related threats.
However, the widespread integration of AI introduces new risks. Security leaders are working hard to keep pace with their organization’s expanding AI footprint, which is now pervasive among everyday business users. The study reveals that 72% of employees regularly use AI tools for work tasks; machine identities now outnumber human identities by roughly 100 to one in UK organizations, with over half (61%) of those machine identities possessing sensitive or privileged access; and more than half (59%) of organizations lack sufficient identity security controls for them.
Moreover, outside the oversight of IT and security teams, AI adoption is accelerating unchecked. While 75% of decision-makers claim they have effective control over AI tools used by employees, 36% admit to using AI tools that aren’t fully approved or managed by IT. Almost half (45%) acknowledge they cannot secure or manage all ‘shadow AI’ tools currently in use.
The emergence of AI agents, autonomous machine identities with human-like reasoning, is another critical challenge. In fact, 61% of UK organizations see the manipulation of AI agent behavior by unauthorized access as a top concern.
“Security teams are being pulled in all directions—defending against external and internal AI threats while leveraging these technologies to strengthen defenses,” says David Higgins, Director of the Field Technology Office at CyberArk. “Easing this strain is essential to prevent major cyber incidents. AI integration isn’t a race; it’s a security challenge. Organizations in both the public and private sectors must prioritize identity security to achieve AI leadership. Attackers are already exploiting AI at scale, so security strategies must evolve to defend against this expanding, AI-driven attack surface — whether for innovation or protection.”