AI-driven ‘synthetic cancer worm’ poses latest threat to the Internet

Researchers have developed a new type of computer worm that harnesses the power of large language models (LLMs) to evade detection and spread.

This “synthetic cancer,” as its developers call it, may represent a new era of malware.

David Zollikofer of ETH Zurich and Benjamin Zimmerman of Ohio State University developed the proof-of-concept malware as part of their submission for the Swiss AI Safety Prize.

Their approach, described in detail in a preprint paper entitled “Synthetic Cancer – Augmenting Worms with LLMs,” demonstrates the potential of AI to power new, sophisticated cyberattacks.

Here is a detailed description of how it works:

  1. Installation: The malware is initially delivered as an email attachment. Once executed, it can download additional dependencies and potentially encrypt the user's files.
  2. Replication: This phase leverages GPT-4 or similar large language models. The worm can interact with these AI models in two ways: a) by making API calls to cloud-based services such as OpenAI's GPT-4, or b) by running a local LLM (which may become common on future devices).
  3. GPT-4 usage: Zollikofer told New Scientist: “We ask ChatGPT to rewrite the file, preserving the semantic structure, but changing the variable naming and changing the logic a bit.” The LLM then generates a new version of the code with modified variable names, restructured logic, and possibly even a different coding style, while preserving the original functionality.
  4. Distribution: The worm scans the victim's Outlook email history and passes that context to the AI. The LLM then generates contextually relevant email replies, complete with social engineering tactics designed to trick recipients into opening an attached copy of the worm.

The virus thus uses AI in two ways: to generate code that copies itself, and to write phishing content that spreads it further.

The ability of the “synthetic cancer” worm to rewrite its own code poses a particular problem for cybersecurity experts, as it could render traditional signature-based antivirus solutions obsolete.
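To see why signature-based detection struggles here, consider a minimal, harmless sketch (not from the paper): even a trivial change such as renaming variables produces a completely different file hash, so a signature derived from one variant never matches the next.

```python
import hashlib

# Two functionally identical snippets that differ only in variable names
# (a hypothetical illustration, not actual worm code).
variant_a = b"def add(a, b):\n    return a + b\n"
variant_b = b"def add(x, y):\n    return x + y\n"

sig_a = hashlib.sha256(variant_a).hexdigest()
sig_b = hashlib.sha256(variant_b).hexdigest()

# A signature recorded for variant_a no longer matches variant_b.
print(sig_a == sig_b)  # False
```

A scanner keyed to `sig_a` would miss `variant_b` entirely, which is why defenders against metamorphic code rely on behavioral and heuristic analysis rather than fixed signatures.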

“The attack side currently has some benefits because there’s more research on it,” notes Zollikofer.

In addition, the worm's ability to create highly personalized, contextually relevant phishing emails increases the likelihood of successful infections.

This comes just months after a similar AI-powered worm was reported in March.

Researchers led by Ben Nassi of Cornell Tech created a worm that could attack AI-powered email assistants, steal confidential data, and spread to other systems.

Nassi's team focused on email assistants based on OpenAI's GPT-4, Google's Gemini Pro, and the open source model LLaVA.

“It can be names, phone numbers, credit card numbers, social security numbers, anything that is considered confidential,” Nassi told Wired, underlining the potential for large-scale data breaches.

While Nassi's worm primarily targeted AI assistants, Zollikofer and Zimmerman's creation goes a step further by directly manipulating the malware's code and creating convincing phishing emails.

Both demonstrate how cybercriminals could leverage widely used AI tools for future attacks.

Growing concerns about AI cybersecurity

These have been turbulent days for cybersecurity in the age of artificial intelligence, including a data theft at Disney by a hacktivist group.

Recently, it emerged that OpenAI had suffered a data breach in 2023 that it wanted to keep quiet.

Zollikofer and Zimmerman have taken several precautions to prevent misuse. Among other things, they are not sharing the code publicly and have deliberately left certain details vague in their paper.

“We are fully aware that this paper presents a malware type with high potential for abuse,” the researchers explain in their publication. “We are publishing this in good faith and in an effort to raise awareness.”

Meanwhile, Nassi and his colleagues predicted that AI worms could start spreading in the wild “in the next few years” and “will trigger significant and undesirable consequences.”

Given the rapid progress we have seen in just four months, this timeline seems not only plausible but perhaps even conservative.
