Security company Wiz reports that cyber attackers are rapidly evolving and now embed artificial intelligence directly into malware payloads. This marks a significant departure from using AI solely to generate phishing campaigns or automate attack planning.
Today, some malware invokes AI models in real time, adapting its behavior and executing dynamically generated commands on compromised systems. This trend introduces new challenges for defenders and signals a major shift in adversarial tactics.
Recent Incidents: AI at the Core of Malware
LameHug: Command Generation via AI
In July 2025, Ukraine’s CERT (CERT-UA) uncovered the LameHug malware. This threat sent base64-encoded prompts to large language models (LLMs) hosted on Hugging Face, instructing them to generate Windows commands for system reconnaissance and data exfiltration. Encoding the prompts helped the malware evade detection, but the LLMs still processed them, returning actionable commands that were then executed on the host machine.
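The encoding trick is simple but effective against naive string matching. The sketch below is a benign, hypothetical illustration (the prompt text is invented, not LameHug's actual payload): the sensitive keywords never appear in cleartext, so a static signature scanning for them misses the payload, yet the receiving API recovers the prompt intact.

```python
import base64

# Hypothetical prompt, for illustration only -- not the actual LameHug payload.
PROMPT = "List files in C:\\Users and summarize any documents found"

def encode_prompt(prompt: str) -> str:
    """Base64-encode a prompt so static keyword scans won't match it."""
    return base64.b64encode(prompt.encode("utf-8")).decode("ascii")

def decode_prompt(blob: str) -> str:
    """Defender-side: recover the original prompt from a captured payload."""
    return base64.b64decode(blob).decode("utf-8")

encoded = encode_prompt(PROMPT)
assert PROMPT not in encoded            # cleartext keywords are hidden
assert decode_prompt(encoded) == PROMPT # but the prompt survives intact
print(encoded)
```

The flip side, for defenders, is that base64 blobs in binaries are themselves a common heuristic trigger, and decoding them during analysis exposes the prompt verbatim.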
Amazon Q Developer Extension: Destructive Payloads
That same month, attackers compromised the Amazon Q Developer Extension in Visual Studio Code. They embedded a payload that leveraged AI agents to orchestrate destructive operations, such as mass file deletion and cloud resource removal. Although implementation errors prevented success in customer environments, the attack highlighted how AI can be used for large-scale system manipulation. The attack vector included exploiting GitHub Actions and a novel breach of Amazon CodeBuild.
s1ngularity: Supply Chain Attacks Enhanced by Prompt Engineering
In August 2025, the s1ngularity supply chain attack saw malicious versions of Nx build system packages uploaded to npm. These packages included prompts for several AI models, including Claude, Gemini, and Q, tasking them with searching for wallet files and documents containing secrets. Attackers repeatedly refined their prompts to bypass LLM guardrails, demonstrating a willingness to experiment with AI’s boundaries for sensitive data discovery.
PromptLock: Local AI for Ransomware
PromptLock, initially thought to be the first AI-written ransomware, was later revealed as an academic project. It ran a local LLM to analyze victim files and generate customized ransom notes. By operating the AI model locally, the malware avoided the oversight and audit trails of cloud-based models, setting a precedent for future threats that sidestep traditional detection mechanisms.
Analysis: The Promise and Pitfalls of AI-Invoking Malware
While these attacks mark a technical leap, most did not achieve outcomes that couldn’t be replicated with traditional, pre-generated code. In many cases, relying on LLMs at runtime introduced unpredictable results and operational failures.
Invoking cloud-based AI services also left network footprints, and LLM guardrails often blocked malicious requests. However, these setbacks haven’t deterred attackers from exploring AI integration further.
Why might criminals persist with AI-invoking malware? Consider the following:
- Evasion: AI-generated commands are dynamic, making detection via static signatures more difficult.
- Trust: AI tools may appear legitimate on victim systems, helping attackers blend in.
- Novelty: There’s a strong experimental drive as threat actors probe the frontiers of AI capabilities.
Additionally, embedded API keys present a new detection opportunity, as researchers can trace and revoke access. The future may see adversaries deploying fully autonomous AI agents within malware, capable of adapting, spreading, and making real-time decisions, raising the stakes for defenders.
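That detection opportunity can be made concrete with a simple scanner. In this sketch, the token prefixes reflect real formats (Hugging Face access tokens start with `hf_`; OpenAI- and Anthropic-style keys start with `sk-`), but the sample payload and the minimum-length bounds are illustrative assumptions.

```python
import re

# Patterns for token formats commonly embedded in AI-invoking payloads.
# Length bounds are rough illustrative assumptions, not official specs.
TOKEN_PATTERNS = [
    re.compile(r"hf_[A-Za-z0-9]{20,}"),    # Hugging Face access tokens
    re.compile(r"sk-[A-Za-z0-9_-]{20,}"),  # OpenAI / Anthropic style keys
]

def find_embedded_tokens(data: str) -> list[str]:
    """Return substrings of `data` that look like embedded LLM API tokens."""
    hits = []
    for pattern in TOKEN_PATTERNS:
        hits.extend(pattern.findall(data))
    return hits

# Hypothetical snippet recovered from a malware sample:
sample = 'headers = {"Authorization": "Bearer hf_abcDEF1234567890abcDEF12"}'
print(find_embedded_tokens(sample))
```

A recovered token gives researchers a direct lever: it can be reported to the provider, traced through usage logs, and revoked, cutting the malware off from its model.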
Defending Against Adaptive AI Threats
AI-invoking malware is still in its infancy, with most attacks so far being ineffective or experimental. However, the trajectory is clear: defenders must prepare for threats where code is generated and executed in unpredictable ways on the host. Organizations must ensure that only trusted sources can operate AI tools, while reinforcing fundamental security measures. As the threat landscape evolves, security teams and innovators must develop adaptive defenses to stay ahead.
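One way to operationalize "only trusted sources can operate AI tools" is an egress policy that permits traffic to AI API endpoints only from approved processes, so an unexpected binary calling an LLM endpoint stands out. The sketch below is a minimal illustration under assumed names: the domains are real AI API hosts, but the process allowlist and policy shape are hypothetical.

```python
# Known AI API hosts (real endpoints); extend per environment.
AI_API_DOMAINS = {
    "api.openai.com",
    "api-inference.huggingface.co",
    "api.anthropic.com",
}

# Hypothetical allowlist of processes permitted to reach AI endpoints.
APPROVED_PROCESSES = {"vscode", "approved-ai-gateway"}

def should_allow(process_name: str, destination: str) -> bool:
    """Permit AI-endpoint traffic only for explicitly approved processes."""
    if destination not in AI_API_DOMAINS:
        return True  # not AI traffic; outside this policy's scope
    return process_name in APPROVED_PROCESSES

print(should_allow("vscode", "api.openai.com"))       # approved tool
print(should_allow("svchost.exe", "api.openai.com"))  # flag for review
```

A real deployment would enforce this at a proxy or firewall rather than in application code, but the policy logic is the same: treat AI API traffic as a distinct, auditable category.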
Source: Wiz Blog, “AI-Invoking Malware: The Evolution of Cyber Threats”