01 April 2025
Cato Networks has released its 2025 Cato CTRL™ Threat Report, highlighting a striking discovery: a threat intelligence researcher with no prior malware coding experience exploited multiple popular generative AI (GenAI) tools, including DeepSeek, Microsoft Copilot, and OpenAI’s ChatGPT, to create malware capable of stealing login credentials from Google Chrome.
The researcher accomplished this by crafting a detailed fictional narrative in which each GenAI tool was assigned a specific role and set of tasks within a constructed scenario. Through this method of ‘narrative engineering,’ the researcher circumvented the security controls intended to prevent such activity, effectively normalizing restricted operations within the story. This novel LLM (large language model) jailbreak technique has been named ‘Immersive World.’
“Infostealers play a significant role in credential theft by enabling threat actors to breach enterprises,” said Vitaly Simonovich, a threat intelligence researcher at Cato Networks. “Our new LLM jailbreak technique, which we’ve uncovered and named Immersive World, showcases the dangerous potential of creating an infostealer with remarkable ease. The emergence of the zero-knowledge threat actor poses a high risk to organizations because the barrier to creating malware has been significantly lowered by GenAI tools.”
The report underscores a critical concern for Chief Information Officers (CIOs), Chief Information Security Officers (CISOs), and IT leaders alike: the increasing accessibility of cybercrime. The rise of the zero-knowledge threat actor marks a fundamental shift in the cybersecurity landscape, demonstrating how virtually anyone, equipped with readily available tools, can potentially execute attacks against enterprises. This reality highlights the urgent need for proactive and comprehensive AI security strategies.
“As the technology industry focuses intensely on GenAI, it becomes evident that the associated risks are as substantial as the potential benefits,” said Etay Maor, chief security strategist at Cato Networks. “Our report details a new LLM jailbreak technique that should have been blocked by GenAI guardrails. Their failure to do so allowed the weaponization of ChatGPT, Copilot, and DeepSeek. Our findings aim to raise awareness of the dangers of GenAI tools and to emphasize the need for stronger safeguards against their misuse.”