Current approaches can’t mitigate the AI cybersecurity threat. What can?

07 February 2024

Adam Maruyama, field CISO, Garrison

A recent National Cyber Security Centre (NCSC) report represents a much-needed shift in the AI security dialogue, moving beyond the very real implications that AI has for information operations and intellectual property rights to its impact on cyber attacks themselves. The report looks to the near future, when AI will make it easier for hackers to identify targets, trick them into opening malicious content, and deploy harder-to-detect malware and ransomware into target networks. Examining each of these steps in the attack chain in sequence makes clear that even supercharged versions of current approaches will be insufficient to counter this threat. Instead, proactive security that leverages robust technical controls to move risk away from users and outside the network is necessary.

Reconnaissance and social engineering: making it easier to trick users
The first two capabilities that AI will enhance for hackers are reconnaissance and social engineering – meaning that attackers will find it easier to identify their targets and will have generative AI’s help in creating content those targets find compelling. For example, a hacker could use an AI research assistant to identify targets of interest at a company, enumerate their interests and contacts, and craft a targeted phishing email for each of them that impersonates a trusted member of their network or mimics an announcement from their children’s school.

Companies are already struggling with the trust and morale implications of phishing simulations, which seem to be the countermeasure of choice against phishing emails. It’s hard to conceive that sending users even harder-to-detect training emails will make them better at spotting real attacks, and harder still to conceive that generating tailored training campaigns on a per-user basis won’t raise serious privacy concerns, with legal, ethical, and trust implications for employers and employees alike.

Tools and exfiltration: evading traditional technical countermeasures
The predicted uplift that AI will provide to attackers’ tooling and exfiltration capabilities, particularly for highly capable state actors, means that technical exploits like ransomware or other malware will have a better chance of taking hold in a target network, and adversaries will be more likely to extract sensitive data from those networks without being detected. Adversaries will also be able to generate a far greater number of unique signatures for their malware, making ‘crowdsourced’ signature-based threat intelligence even less effective.
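The brittleness of signature matching is easy to illustrate. In the minimal sketch below (the ‘malware’ payloads are placeholder byte strings and the blocklist is hypothetical), changing a single character in a file produces an entirely different hash, so every machine-generated variant arrives with a clean record against a hash-based threat feed:

```python
import hashlib

# Hypothetical blocklist of known-bad SHA-256 hashes, in the form that
# shared threat-intelligence feeds commonly distribute them.
KNOWN_BAD_SHA256 = {
    hashlib.sha256(b"MALWARE-SAMPLE-v1").hexdigest(),
}

def flagged(payload: bytes) -> bool:
    """Return True if the payload matches a known-bad signature."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_SHA256

print(flagged(b"MALWARE-SAMPLE-v1"))  # True: exact match with the feed
print(flagged(b"MALWARE-SAMPLE-v2"))  # False: one character changed,
                                      # brand-new hash, no match
```

An attacker with generative tooling can mint such variants faster than any feed can catalogue them, which is precisely the uplift the report describes.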

Faced with a more protean technical threat, it’s clear that current detection and response technologies are not up to the task. Adversaries will be able to leverage AI to better hide their ingress points into the network, and there is already evidence that advanced nation-state APTs like VOLT TYPHOON are focusing on low-profile, living-off-the-land (LotL) tactics to avoid being kicked out of critical infrastructure networks. Even if AI algorithms similar to the attackers’ can be leveraged on the defensive side of cybersecurity – as numerous vendors already do – AI makes it more likely that a few well-crafted attacks will make it into target networks and, once inside, will be better able either to remain in place or to identify and exfiltrate target data.

Toward a more proactive security model
The common thread running through current cybersecurity approaches, and what renders them ineffective against the high-volume, highly adaptable threat posed by AI-driven attacks, is their reliance on detecting that content is malicious before doing anything about it. Building a security model that is resilient against this threat requires inverting that assumption: trusting only sites and code that have been scrupulously reviewed, and treating all other content as inherently risky and potentially compromised.

A prime example of such an approach is the use of robust remote browser isolation (RBI) technology to shield users from the browser-based risks of phishing and ransomware. Remote browser isolation pushes the processing of non-trusted websites outside the customer’s network perimeter, rendering endpoints safe from technical exploits while giving users real-time awareness that they are interacting with a non-trusted environment. Users and endpoints are thus protected from the potential exploits behind their ‘first click’, and users are reminded of the risk before entering data into what could be a spoofed site designed for credential harvesting.
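To make that inversion concrete, here is a minimal sketch of the routing decision such a deployment implies (the allowlist entries and gateway address are hypothetical, and real RBI products make this decision in a gateway or proxy rather than in endpoint code): explicitly reviewed sites are rendered locally, and everything else is pushed to the isolation environment by default.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of scrupulously reviewed sites; everything
# else is treated as untrusted by default.
REVIEWED_SITES = {"intranet.example.com", "payroll.example.com"}

# Placeholder address standing in for a vendor's isolation gateway.
ISOLATION_GATEWAY = "https://rbi.example.net"

def route(url: str) -> str:
    """Decide how a browser request is handled: direct rendering is
    the exception, remote isolation is the default."""
    host = urlparse(url).hostname or ""
    if host in REVIEWED_SITES:
        return "DIRECT"  # reviewed and trusted: render locally
    return f"ISOLATE via {ISOLATION_GATEWAY}"  # untrusted: render remotely

print(route("https://intranet.example.com/home"))  # DIRECT
print(route("https://unknown-site.example.org/"))  # ISOLATE via ...
```

The important property is the default branch: a site the organisation has never reviewed is never rendered locally, which is exactly the guarantee that detection-based controls cannot offer.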

By isolating the vast majority of the 1.1 billion websites on the Internet from corporate networks, RBI can significantly mitigate the risk of AI-based attacks that would otherwise overwhelm the human judgment and software-based detection algorithms that are the bedrock of anti-phishing and endpoint security today. Developing further technologies that treat more and more content as untrusted, rather than trying to mitigate the effects of a ‘trust by default’ network security model, offers a far more robust alternative to engaging in an offensive-versus-defensive ‘arms race’ built on AI capabilities.