
Symantec Warns: AI Makes Cyberattacks Easier Than Ever


Security vendor Symantec recently showcased how a large language model (LLM)-powered tool could execute a basic cyberattack with minimal prompt engineering, highlighting the potential future risks of AI-driven cyber threats. The research was detailed in a blog post published on March 12.

As part of its proof of concept, Symantec noted that most known uses of LLMs by attackers have so far been passive, such as crafting convincing phishing emails or assisting with basic coding. AI-powered agents, however, expand what both legitimate users and potential threat actors can do: with generative AI that can interact with websites and automate tasks, the cyberattack landscape is changing.

How the AI-Driven Phishing Cyberattack Worked

For this research, Symantec utilized OpenAI’s Operator agent, which was introduced as a research preview for U.S.-based OpenAI Pro users on January 23. The researchers instructed the agent to:

  • Identify a Symantec employee in a specific role
  • Find their email address
  • Create a PowerShell script to collect system data
  • Send the script via a phishing email with a convincing lure

The test subject was Symantec’s principal intelligence analyst, Dick O’Brien, who later discussed the experiment with Dark Reading. Initially, OpenAI’s security guardrails blocked the attempt, flagging it as a violation of privacy and security policies. However, with slight modifications to the prompt—claiming that the email was authorized—the AI agent proceeded.

By leveraging publicly available data and inferring from Broadcom email patterns, the tool successfully determined O’Brien’s email address. It then drafted a PowerShell script and attached it to a fabricated email that appeared to come from IT Support, urging him to execute the script.
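
For a sense of scale, here is a minimal, benign sketch of the kind of system-data-collection routine the article describes. This is hypothetical and for illustration only, not the script the agent actually produced; it gathers only basic host details and writes them to a local file.

```powershell
# Hypothetical sketch: collect basic, non-sensitive host details.
$os = Get-CimInstance Win32_OperatingSystem

$info = [ordered]@{
    Hostname = $env:COMPUTERNAME
    User     = $env:USERNAME
    OS       = $os.Caption
    LastBoot = $os.LastBootUpTime.ToString('s')
}

# Write the collected data to a local file; a real attack would
# attempt to exfiltrate it instead.
$info | ConvertTo-Json | Out-File "$env:TEMP\sysinfo.json"
```

Even a script this simple, paired with a convincing lure, illustrates how little technical sophistication the attack required.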

Interestingly, before generating the script, the AI agent browsed multiple web pages about PowerShell, seemingly to refine its approach. This ability to research and adapt in real time highlights the growing capabilities of AI-driven cyber threats.

While AI tools like ChatGPT have implemented guardrails to prevent misuse, O’Brien noted a significant concern: AI agents let users observe their actions on-screen and intervene when necessary. If the tool runs into a security restriction, the user can clear it manually and then allow the AI to continue its task. This raises concerns about how attackers might exploit AI agents to automate malicious activities.

“The prompt engineering was minimal,” O’Brien explained. “We just needed to bypass the security filters and refine the AI’s approach. If someone puts in more effort, they could create far more sophisticated attacks.”

For cybersecurity defenders, the key takeaway is that while AI-assisted attacks may not yet be highly advanced, they significantly lower the barrier to entry for cybercriminals. Attack volume could increase drastically as AI makes cyber threats more accessible to less-experienced actors.

The Future of AI in Cybersecurity

Although AI-driven attacks are still evolving, security professionals should stay vigilant. O’Brien emphasized that while elite cybercriminals currently possess superior malware development skills, AI has the potential to place powerful tools in the hands of far more individuals.

“It’s not just about sophisticated AI threats appearing overnight,” he said. “It’s about the increasing number of cyberattacks as AI reduces the effort needed to launch them.”

As AI technology continues to develop, organizations must strengthen their defenses, enhance threat detection strategies, and educate employees about the risks posed by AI-powered phishing and other cyber threats.
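
As one concrete example of such hardening (a common Windows measure, not something Symantec’s report specifically prescribes), defenders can enable PowerShell script block logging so that any script a user is lured into running leaves an auditable trace:

```powershell
# Enable PowerShell Script Block Logging via the documented policy
# registry key (requires administrative rights). Executed script
# content is then recorded in the
# Microsoft-Windows-PowerShell/Operational event log.
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging'

if (-not (Test-Path $key)) {
    New-Item -Path $key -Force | Out-Null
}
Set-ItemProperty -Path $key -Name EnableScriptBlockLogging -Value 1 -Type DWord
```

Combined with alerting on unusual script activity, logging of this kind helps surface exactly the low-effort attacks the research warns about.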
