
Large language models (LLMs) powering today's artificial intelligence (AI) tools can be exploited to develop self-augmenting malware capable of bypassing YARA rules.
“Generative AI can effectively reduce detection rates by enhancing the source code of small malware variants to evade string-based YARA rules,” Recorded Future said in a new report shared with The Hacker News.
The findings come from a red teaming exercise designed to uncover malicious use cases for AI technologies, which threat actors are already experimenting with to create malware code snippets, generate phishing emails, and conduct reconnaissance on potential targets.

The cybersecurity firm said it submitted to an LLM a known piece of malware called STEELHOOK, which is associated with the APT28 hacking group, along with its YARA rules, and asked the model to modify the source code to sidestep detection while keeping the original functionality intact and ensuring the generated code was syntactically free of errors.
Armed with this feedback mechanism, the altered malware generated by the LLM was able to avoid detection by simple string-based YARA rules.
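To make that loop concrete, below is a minimal sketch of such a feedback mechanism, assuming the open-source yara-python bindings; the rule, its strings, and the rewrite_with_llm helper are illustrative stand-ins (the LLM call is deliberately stubbed out), not Recorded Future's actual tooling or STEELHOOK's real signature.

```python
import yara  # yara-python bindings

# A simple string-based rule of the kind the exercise targeted.
# The rule name and strings are illustrative, not a real STEELHOOK signature.
RULE_SOURCE = r'''
rule demo_string_rule
{
    strings:
        $s1 = "browser_password_dump"
        $s2 = "C2_BEACON_INTERVAL"
    condition:
        any of them
}
'''

def rewrite_with_llm(source_code: str, rule_text: str) -> str:
    """Hypothetical helper: ask an LLM to rewrite the source so the rule's
    strings no longer appear while preserving behaviour. Stubbed out here."""
    raise NotImplementedError("LLM call intentionally omitted")

def evasion_loop(source_code: str, max_rounds: int = 5) -> str:
    """Scan, request a rewrite, and rescan until the rule no longer matches."""
    rules = yara.compile(source=RULE_SOURCE)
    for _ in range(max_rounds):
        if not rules.match(data=source_code):
            return source_code  # no matches: the string-based rule is evaded
        source_code = rewrite_with_llm(source_code, RULE_SOURCE)
    return source_code
```

The scan-rewrite-rescan structure is what lets the model iterate until the detection strings are gone, rather than relying on a single rewrite.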
This approach has its limitations, most notably the amount of text the model can handle as input at one time, which makes operating on larger code bases difficult.
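A common workaround is to split a large code base into overlapping chunks that each fit within the model's context window and rewrite them one at a time. The sketch below illustrates the idea; the character budget is only a rough stand-in for real token counting, and the numbers are arbitrary.

```python
def chunk_source(text: str, max_chars: int = 12_000, overlap: int = 500) -> list[str]:
    """Split source text into overlapping chunks small enough for a model's
    context window (character count used as a rough proxy for tokens)."""
    chunks, start = [], 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Overlap keeps some shared context so rewrites stay consistent
        # across chunk boundaries.
        start = end - overlap
    return chunks
```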
In addition to modifying malware to fly under the radar, such AI tools can also be used to create deepfakes impersonating senior executives and leaders, and to conduct influence operations that mimic legitimate websites at scale.
Additionally, generative AI is expected to accelerate threat actors’ ability to conduct reconnaissance of critical infrastructure and gather information that may have strategic use in subsequent attacks.
“By leveraging multimodal models, public images and videos of industrial control systems and manufacturing equipment, in addition to aerial imagery, can be parsed and enriched to find additional metadata such as geolocation, equipment manufacturer, model, and software versioning,” the company said.
Indeed, Microsoft and OpenAI warned last month that APT28 has used LLMs to understand satellite communication protocols, radar imaging technologies, and specific technical parameters.

Organizations are advised to carefully review publicly available images and videos depicting sensitive devices and sanitize them if necessary to mitigate the risk of such threats.
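One concrete, if partial, step is auditing and stripping embedded EXIF metadata (GPS coordinates, camera model, software tags) before publishing such imagery. The sketch below, assuming the Pillow imaging library and a hypothetical file name, shows what that might look like; it does not address what is visible in the pixels themselves, which multimodal models can still parse.

```python
from PIL import Image
from PIL.ExifTags import TAGS  # maps numeric EXIF tag IDs to readable names

def audit_metadata(path: str) -> dict:
    """Return the EXIF fields embedded in an image (camera model, software,
    GPSInfo block, ...), keyed by human-readable tag names."""
    with Image.open(path) as img:
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in img.getexif().items()}

def strip_metadata(path: str, out_path: str) -> None:
    """Re-save only the pixel data so EXIF/GPS metadata is not published.
    (Simple approach; palette images may also need their palette copied.)"""
    with Image.open(path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(out_path)

if __name__ == "__main__":
    fields = audit_metadata("site_photo.jpg")  # hypothetical file name
    if "GPSInfo" in fields:
        print("Image carries embedded GPS data; scrub before publishing")
    strip_metadata("site_photo.jpg", "site_photo_clean.jpg")
```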
This development comes as a group of academics found that it is possible to jailbreak LLM-powered tools and produce harmful content by passing input in the form of ASCII art (e.g., “how to build a bomb,” where the word BOMB is written using “*” characters and spaces).
The practical attack, dubbed ArtPrompt, weaponizes “the poor performance of LLMs in recognizing ASCII art to bypass safety measures and elicit undesired behaviors from LLMs.”
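For illustration only, the sketch below reproduces the masking format the researchers describe, rendering a deliberately benign word as ASCII art made of “*” characters and spaces and splicing it into a prompt template; it assumes the pyfiglet library and omits the rest of the ArtPrompt pipeline, such as the instructions that tell the model how to decode the art.

```python
import pyfiglet  # renders text as FIGlet-style ASCII art

def mask_word_as_ascii_art(word: str) -> str:
    """Render a word as ASCII art using only '*' and spaces, mirroring the
    masking format described for ArtPrompt (shown here with a benign word)."""
    art = pyfiglet.figlet_format(word)
    return "".join("*" if ch not in (" ", "\n") else ch for ch in art)

def build_cloaked_prompt(template: str, word: str) -> str:
    """Substitute the ASCII-art-masked word into a prompt template."""
    return template.replace("[MASK]", "\n" + mask_word_as_ascii_art(word) + "\n")

if __name__ == "__main__":
    print(build_cloaked_prompt("Tell me about the [MASK] parade.", "LANTERN"))
```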