ChatGPT – friend or foe?

08 May 2023

The launch of ChatGPT signals the dawn of a new era for cybersecurity – but will that era be good or bad? The jury is still out, reports Amy Saunders

It’s broadly agreed that AI and ML present a double-edged sword, with many organisations reporting mixed feelings about the technology.

For cybersecurity professionals, AI is a powerful instrument that expedites and improves tasks such as automated security processing and threat detection, reports Matt Aldridge, principal solutions consultant, OpenText Cybersecurity. “However, we must remember that bad actors have the very same toolsets available for their criminal activity. It is proving to be a constant cat-and-mouse game between these two parties.”

Bad actors have always moved with the times, if not ahead of them - and AI is no exception. From AI-generated phishing emails to AI-driven pattern detection and malware creation, threats are becoming increasingly sophisticated; at the same time, AI is a handy tool incorporated into many modern cybersecurity solutions. “On balance, it is currently perceived as more of a threat until it is fully leveraged by all organisations,” says Alan Hayward, sales & marketing manager at SEH Technology.

“As these tools become more advanced, and we as defenders learn to use them in new and innovative ways, so too will attackers. Nearly all innovation is dual-use technology,” explains Jonathan Hencinski, VP, security operations, Expel.

“The key question is where the balance of advantage will ultimately lie. AI is already being deployed by both network defenders and those attacking them. The strategic issue for the community is where that advantage will fall in the long term,” asserts Will Dixon, global head of the academy and community, ISTARI.

Enter ChatGPT

The launch of ChatGPT in November 2022 made huge waves. An advanced form of AI developed by OpenAI, ChatGPT is a large language model that can interpret natural language and generate text that is often difficult to distinguish from human writing.

Significant concerns have been raised, including potential malicious use by hackers or authoritarian governments. Bad actors can use ChatGPT and other AI writing tools to make phishing scams more effective. Traditional phishing messages are often easily recognisable because they are written in clumsy English, but ChatGPT can fix this, explains Florian Malecki, executive vice president of marketing, Arcserve. “Mashable tested ChatGPT’s ability by asking it to edit a phishing email. Not only did it quickly improve and refine the language, but it also went a step further and blackmailed the hypothetical recipient without being prompted to do so.”

Threat actors can use technology like ChatGPT to automate the creation of convincing spear phishing emails, reports Corey Nachreiner, CSO at WatchGuard. Singapore’s Government Technology Agency demonstrated this a few years ago, and more recently, members of a popular underground forum have used ChatGPT to write data-stealing malware. “In the future, we also expect to see threat actors leverage adversarial ML to combat the ML algorithms used in security services and controls.”

“AI bots like ChatGPT pose cybersecurity threats for several reasons; not only can they aid in social engineering attacks, but they can also help develop code that can be used to inform cyberattacks,” says Michael Lakhal, director of product management, product strategy, OneSpan. “Take the ability for an AI bot to replicate written prose. Being able to parse through countless examples of an individual’s writing style online means a sophisticated AI system could convincingly replicate how a specific person writes. This opens up huge potential for phishing attacks, with emails pretending to be from certain people or businesses becoming almost imperceptible to the average person.”

With AI making it easier to create malicious code at scale, exposure to cybercrime has increased significantly. Malecki says that, while the number of security tools available to protect the enterprise may be increasing, those tools may not be able to keep pace with emerging AI technologies that could heighten an organisation’s vulnerability to security threats.

“The constantly evolving cybersecurity industry can be compared to the Lernaean Hydra: mitigate one threat and three newer ones emerge!” agrees Lakhal. “And easily accessible AI systems won’t help contain this, allowing hackers to keep finding new ways of developing their attacks.”

Check Point Research reported that, within weeks of ChatGPT’s release, individuals in cybercrime forums, including those with limited coding skills, utilised it to create software and emails for espionage, ransomware attacks, and malicious spamming. “Check Point said it’s still too early to tell if ChatGPT will become the go-to tool among Dark Web dwellers,” reports Malecki. “Still, the cybercriminal community has demonstrated a strong interest in ChatGPT and is already using it to develop malicious code.”

British security agency GCHQ has also recently identified ChatGPT and other AI chatbots as an emerging security threat to sensitive information. “Enterprises can protect against ill-intentioned actors by implementing clear information security policies and rules on the use of AI-powered chatbots, especially where sensitive data is involved. In some cases, an outright ban might be the most sensible option to safeguard data until the threat is better understood,” asserts Hayward.

Indeed, at the end of March, Elon Musk and more than 1,000 AI experts signed an open letter calling for a six-month pause in developing systems more powerful than GPT-4, OpenAI’s latest model, citing potential risks to society.

“Should we let machines flood our information channels with propaganda and untruth?... Should we develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us?... Such decisions must not be delegated to unelected tech leaders,” reads the letter.

It’s not all doom and gloom for the role of AI bots in cybersecurity, however, and enterprises stand to benefit greatly from incorporating AI into their networks.

“Firstly, it can help to improve threat detection capabilities by spotting threat patterns that human analysts often miss,” says Hayward. “Secondly, it improves efficiency by analysing data patterns faster than humans, allowing for faster breach detection. Finally, AI bots can monitor networks at all times, even when humans are unable to, providing a more comprehensive level of security continuously.”
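
To put Hayward’s first point in concrete terms, the sketch below shows one common approach: an anomaly detector trained on routine login telemetry that flags an out-of-pattern session. It is a minimal illustration built on scikit-learn’s IsolationForest, and every feature name and figure in it is an assumption made for this example, not real product behaviour.

```python
# A minimal sketch of the pattern-spotting Hayward describes: an
# unsupervised model learns what "normal" login telemetry looks like,
# then flags sessions that deviate. All features and figures are
# illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical telemetry per session: [login hour, MB transferred, failed attempts]
normal_sessions = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around 10:00
    rng.normal(50, 15, 500),  # ~50 MB moved per session
    rng.poisson(0.2, 500),    # failed attempts are rare
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_sessions)

# A 03:00 login that moves 900 MB after seven failed attempts
suspicious = np.array([[3, 900, 7]])
print(detector.predict(suspicious))  # -1 means anomaly, 1 means normal
```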

Indeed, AI has real potential to enhance the speed, precision, and impact of operational defence, and support organisational resilience.

“AI is already being used to support the security community by enhancing and scaling process-heavy tasks typically performed by analysts, such as first-response incident triage,” says Dixon. “AI defence is becoming deeply integrated into defensive responses within the cybersecurity ecosystem. Ultimately, how these technologies are adopted is a question of maturity and resource. There are levels of sophistication for defenders using AI - improving security posture, dynamic threat detection, proactive defence, response and recovery, and ultimately attribution.”

“Imagine a security chatbot that inspects your security controls and configurations, points out gaps, and recommends policies or defences,” explains Nachreiner. “In the future, AI systems will help audit, assess, and validate our security controls. ChatGPT’s natural language processing means that we may have security chatbots advising security professionals in the future.”
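
As a thought experiment, the sketch below shows roughly what such an advisory chatbot could look like at its simplest: a script that hands a firewall rule set to a language model and asks for an audit. It assumes the OpenAI Python client’s chat interface as it stood in early 2023, and the model choice, prompt, and rules are invented for illustration rather than a recommended design.

```python
# A toy version of the security chatbot Nachreiner imagines: hand a
# firewall rule set to a language model and ask it to point out gaps.
# Uses the OpenAI Python client's chat API as it stood in early 2023;
# the model, prompt, and rules are invented for illustration.
import openai

openai.api_key = "YOUR_API_KEY"  # never hard-code keys in real tooling

firewall_rules = """
allow tcp any -> 10.0.0.5 port 22
allow tcp any -> 10.0.0.8 port 3389
allow udp any -> any port 53
"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a security auditor. Review the firewall rules "
                    "you are given, flag risky ones, and suggest remediations."},
        {"role": "user", "content": firewall_rules},
    ],
)

print(response.choices[0].message.content)  # e.g. flags SSH/RDP open to 'any'
```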

“For cybersecurity organisations, using AI is no longer an optional improvement, but an absolute necessity,” explains Aldridge. “Considering the rise of AI-enhanced cyberattacks, the only way to maintain enterprise security is by incorporating AI into threat recognition systems to cope with the increasing sophistication and intelligence of cybercriminal techniques. You must fight fire with fire – or risk being left behind.”

An answer to the skills shortage? In a word, no…

The impact of AI bots on IT employees is yet to be fully understood, but we should expect significant changes in roles and staffing levels, says Hayward: “Repetitive tasks will be less of a priority for employees in the future, and roles will change to require a greater understanding of AI technologies, leveraging human skillsets to drive future creativity and innovation.”

Aldridge agrees that AI-enabled cyber tools are already reducing the burden of repetitive workload. “SOC teams will be increasingly enabled to focus on the most highly threatening, targeted events while AI-enabled solutions attend to the daily grind of repeated, unremarkable breach attempts and internal user errors.”
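
The sketch below illustrates that division of labour in miniature: a classifier trained on past analyst verdicts routes routine alerts to automation, so only the riskier remainder reaches a human. The features, verdicts, and tiny training set are invented for illustration, using scikit-learn.

```python
# A simplified sketch of the triage split Aldridge describes: a model
# trained on past analyst verdicts routes routine alerts to automation,
# leaving only higher-risk ones for the SOC. The features, labels, and
# training set are illustrative assumptions.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical alerts with analyst verdicts (1 = escalated)
history = [
    ({"source": "edr",   "severity": 9, "asset_critical": 1}, 1),
    ({"source": "ids",   "severity": 2, "asset_critical": 0}, 0),
    ({"source": "email", "severity": 3, "asset_critical": 0}, 0),
    ({"source": "edr",   "severity": 8, "asset_critical": 1}, 1),
    ({"source": "ids",   "severity": 1, "asset_critical": 0}, 0),
    ({"source": "email", "severity": 7, "asset_critical": 1}, 1),
]

vec = DictVectorizer()
X = vec.fit_transform([features for features, _ in history])
y = [verdict for _, verdict in history]
model = LogisticRegression().fit(X, y)

new_alert = {"source": "ids", "severity": 2, "asset_critical": 0}
p_escalate = model.predict_proba(vec.transform([new_alert]))[0, 1]
print("route to analyst" if p_escalate > 0.5 else "auto-close and log")
```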

“Eventually, AI may get good enough that we see staffing reductions everywhere, including security,” says Nachreiner. “These AI/ML systems are good at separating the wheat from the chaff for security indicators and alerts, but you still need human incident responders to investigate the remaining highlights. However, as AI improves, even that may not be the case for long.”

The skills profile for IT talent is evolving in the wake of these new AI/ML developments, and potentially not for the better. “We will need an increasingly sophisticated profile for cyber talent, with a focus not just on technical skills but on strategic leaders who can orchestrate the complex technology and business processes needed to match the cyber arms race with AI-enabled attackers,” says Dixon.

Aldridge agrees: “Organisations now need professionals who are proficient in, and knowledgeable about, these new tools on top of the usually required cybersecurity skills. This trend is also revealing the unsustainability of mainstream hiring practices and the impossibly high demands most companies have. LinkedIn job descriptions and company announcements are now requiring that professionals have up to two years of ChatGPT experience, when the tool has only been available for a few months!”

Tools such as ChatGPT have brought about a monumental paradigm shift; however, it is unlikely that this technology could ever fully replace humans. Cybersecurity employees bring with them empathy and emotional intelligence, allowing them to understand the psychology and human errors that often lead to cybersecurity incidents.

“Humans can also contextualise and see the bigger picture around an event, something that is not easy for a machine to do, however smart it might be,” says Hayward. “Furthermore, humans can make more ethical decisions by prioritising morality and human values over pure data and logic.”

“Despite the rapid advances in AI technology, it will never replace the critical and creative thinking that human analysts bring to the cybersecurity sector,” reports Hencinski. “We will see both attackers and defenders leverage these technologies, and it will up the tempo and scale of the attacks we see. But in the end, humans will be in the loop on both sides, just leveraging different aspects of the new tools at their disposal.”

“ChatGPT and similar AI tools are built on deep neural networks and can only respond on a predictive, not a proactive, basis: without initial human input, they would not be able to function on their own,” explains Aldridge. “Crucially, they also lack the critical thinking and creativity that human brains have - the innovative power that drives novelty and growth in every sector, including cybersecurity.”