The impact of AI on security: advocacy vs apprehension in a time of uncertainty

04 July 2024

Brian Martin, head of product development, innovation and strategy, Integrity360

The debate over whether AI is friend or foe continues to rumble away in cybersecurity spheres, further fuelled by the recent rise of generative AI tools such as ChatGPT.

On the one hand, many security professionals have heaped praise upon natural language processing platforms, proclaiming their potential to transform security for the better. From streamlining SOC operations to predicting potential threat and intrusion scenarios and fine-tuning security configurations, many organisations are already exploring the application and potential benefits of AI in security.

On the other, however, serious concerns are being raised that AI could democratise cybercrime, enabling attackers to develop and carry out sophisticated attacks more easily and effectively.

Such concerns are only natural. Time and time again we’ve witnessed the relentless exploits of attackers who continue to evolve their attack strategies to bypass or overcome target defences.

Caught between these two opposing tides of advocacy and apprehension, it’s hard to know which side is right about AI’s use in security.

To better understand industry sentiment and the key arguments on either side of the fence, we explored the debate further by surveying 205 IT security decision makers.

Three key AI concerns among security professionals

From this analysis, three key concerns emerged regarding the use of AI in security:

#1 – Worries over deepfake attacks

More than two thirds (68%) of respondents to the survey highlighted their worries about cybercriminals’ use of deepfakes in targeting organisations.

Interestingly, this aligns closely with a 2022 survey from VMware, in which 66% of respondents said they had seen malicious deepfakes used as part of phishing attacks in the previous 12 months.

The impact of this novel technology being used for nefarious purposes has already been demonstrated, perhaps most famously in a deepfake video impersonating Ukrainian President Volodymyr Zelensky, falsely requesting that the country’s forces lay down their arms and surrender.

While this example was politically motivated, organisations must be aware of and prepared for similar threats. Indeed, back in 2020, cybercriminals stole $35 million after using AI to clone a company director’s voice and trick a bank manager into authorising transfers.

With AI on the rise, sophisticated attacks such as this will only become more prominent in the coming months.

#2 – Heightening attack volumes

Meanwhile, 59% of respondents agreed that AI is increasing the volume of cyberattacks facing organisations.

Indeed, we’ve already seen AI used offensively. Check Point Research, for example, identified cybercriminals using ChatGPT to craft social engineering attacks and even develop malware code.

At present, the ability of natural language processing tools to create phishing messages is perhaps the area of greatest concern, with threat actors able to accurately mimic the language, tone, and design of legitimate emails to trick their victims.

#3 – Poor understanding of AI

Thirdly, we found that only 46% of organisations disagreed with the statement that they do not understand the impact of AI on cybersecurity, meaning that most could not confidently claim to understand it. Further, the survey revealed that CIOs appear to have even less comprehension of AI’s impact, with just 42% disagreeing with the statement.

This potential gap in knowledge and understanding among key executives is likely responsible for much of the stress relating to AI in security, with 61% of respondents expressing apprehension over the increasing use of AI.

Similar concerns have been raised about AI taking people’s jobs. However, it has become increasingly clear that such technologies are largely being introduced to help workers by enhancing their efficiency and productivity, rather than to replace them.

People are naturally wary of what they don’t understand. These figures therefore highlight the importance of education around AI in security to ease potential anxieties.

AI is becoming an increasingly important security tool

From the threat of deepfakes and phishing to the sheer novelty of AI tools, it’s easy to see where AI-related concerns stem from in a security context. Despite these worries, however, much of the industry already recognises AI’s potential to enhance security practices.

According to our survey, 73% of security decision makers agree that AI is becoming an increasingly important tool for security operations and incident response, highlighting the belief that AI can serve as a force for good, deployed both defensively and offensively.

More than seven in ten (71%) of respondents also agree that AI is improving the speed and accuracy of incident response, with key technologies able to analyse vast amounts of data and identify threats in real time. And 67% believe that using AI improves the efficiency of cybersecurity operations, something that is particularly useful for routine tasks, with automation freeing up staff to focus on the more complex and strategic aspects of their work.
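To make the routine-task point concrete, the sketch below shows one common pattern: training a simple anomaly detection model on historical activity so that unusual events are flagged automatically rather than triaged by hand. It is a minimal, illustrative example only; the login features, the simulated data, and the choice of scikit-learn’s IsolationForest are assumptions made for this sketch, not findings from the survey or a description of any particular product.

```python
# Minimal sketch of AI-assisted alert triage, assuming scikit-learn is
# available and that login events have already been parsed into numeric
# features. All feature names and values here are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per login event: hour of day, bytes transferred,
# and failed attempts in the preceding hour.
normal = np.column_stack([
    rng.normal(13, 2, 500),       # logins cluster around office hours
    rng.normal(2_000, 400, 500),  # typical transfer sizes
    rng.poisson(0.2, 500),        # failed attempts are rare
])
suspicious = np.array([
    [3, 9_500, 6],    # 3 a.m. login, large transfer, repeated failures
    [23, 8_000, 4],
])

# Train on historical activity, then score new events; the model flags
# outliers (-1) without hand-written rules for each attack pattern.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
events = np.vstack([normal[:3], suspicious])
for event, label in zip(events, model.predict(events)):
    verdict = "investigate" if label == -1 else "routine"
    print(f"hour={event[0]:5.1f} bytes={event[1]:7.0f} "
          f"fails={event[2]:3.0f} -> {verdict}")
```

In practice, output like this would feed a SOC queue, letting analysts concentrate on the small number of events marked for investigation rather than reviewing every login by hand.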

Of course, it’s essential that businesses consider how AI can be used against them and put processes in place to protect against these growing threats. Without question, threat actors will look to leverage such tools in whatever way they can to gain an edge.

However, at the same time, there are clear benefits to be gained from proactively embracing AI. Indeed, organisations that do so will be well placed to defend against both traditional and novel attack methods moving forward.