AI-Powered Malware: How PROMPTFLUX Is Powering New Cyber Threats

Google's Threat Intelligence Group (GTIG) recently exposed PROMPTFLUX, an experimental Visual Basic Script (VBScript) malware that uses Gemini AI to regenerate its code dynamically. This capability marks a significant step in evasion and persistence tactics, forcing security teams to rethink their defenses.
Inside PROMPTFLUX's Self-Modifying Engine
PROMPTFLUX connects directly to Google's Gemini API, using a hard-coded API key for its requests. Its "Thinking Robot" module crafts specialized prompts that ask Gemini to return only VBScript code designed to evade detection and analysis. This just-in-time code regeneration is meant to keep the malware a step ahead of static, signature-based antivirus checks, sharply reducing the value of traditional detection methods.
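The pattern described above leaves a recognizable artifact for defenders: a script that embeds both the Gemini API endpoint and a hard-coded API key. A minimal triage sketch in Python (the key pattern and endpoint string are illustrative assumptions based on Google API conventions, not published indicators of compromise):

```python
import re

# Hostname of the Gemini API that the article says PROMPTFLUX calls directly.
GEMINI_ENDPOINT = "generativelanguage.googleapis.com"

# Google API keys conventionally begin with "AIza"; this regex is an
# illustrative assumption, not a published IOC.
API_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{30,}")

def looks_like_llm_calling_script(script_text: str) -> bool:
    """Flag script text that both references the Gemini API endpoint
    and embeds what looks like a hard-coded Google API key."""
    return GEMINI_ENDPOINT in script_text and bool(API_KEY_RE.search(script_text))
```

On a live host, a hunt like this would be run over script files collected from autorun locations; a match is only a triage signal, not proof of infection.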
The malware's persistence mechanisms are equally sophisticated. PROMPTFLUX stashes its regenerated scripts in the Windows Startup folder and attempts to propagate through removable drives and network shares. While a feature for fully autonomous self-modification is currently disabled, its architecture reveals ambitions for a truly adaptive, metamorphic threat.
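Because the regenerated scripts land in the Windows Startup folder, one simple hunt is to enumerate script files there. A sketch (the extension list is an illustrative assumption; the directory is parameterized so it can be pointed at the real per-user path, %APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup):

```python
from pathlib import Path

# Script extensions worth reviewing in autorun locations; this list is an
# illustrative assumption, not an exhaustive one.
SCRIPT_EXTS = {".vbs", ".vbe", ".js", ".ps1"}

def suspicious_startup_scripts(startup_dir: str) -> list[str]:
    """Return script files found in the given Startup folder, sorted by path."""
    folder = Path(startup_dir)
    if not folder.is_dir():
        return []
    return sorted(
        str(p) for p in folder.iterdir()
        if p.is_file() and p.suffix.lower() in SCRIPT_EXTS
    )
```

Any hit warrants manual review, since legitimate software rarely drops loose VBScript into the Startup folder.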
LLM-Driven Malware: A Broader Trend
PROMPTFLUX is part of a growing wave of malware using large language models (LLMs) for advanced capabilities. Google's investigation uncovered several other notable examples:
- FRUITSHELL: A PowerShell reverse shell that embeds hard-coded prompts intended to bypass analysis by LLM-powered security tools.
- PROMPTLOCK: Go-based ransomware that dynamically generates malicious Lua scripts via LLMs.
- PROMPTSTEAL (LAMEHUG): A data miner used by Russia's APT28 that queries models hosted on Hugging Face to generate attack commands on the fly.
- QUIETVAULT: JavaScript-based stealer targeting GitHub and NPM credentials.
These cases highlight how attackers exploit AI not just for efficiency, but to automate code generation, evasion, and on-the-fly adaptation.
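A common thread across these families is direct communication with a hosted LLM API. A coarse triage sketch that checks file contents for known LLM API hostnames (Gemini for PROMPTFLUX, Hugging Face for PROMPTSTEAL; the hostname list is illustrative and far from exhaustive):

```python
# LLM API hostnames associated with the families above; an illustrative
# assumption, not a complete or authoritative IOC list.
LLM_API_HOSTS = (
    "generativelanguage.googleapis.com",  # Gemini API (PROMPTFLUX)
    "api-inference.huggingface.co",       # Hugging Face inference API (PROMPTSTEAL)
)

def llm_hosts_referenced(text: str) -> list[str]:
    """Return which known LLM API hostnames appear in a file's text.
    A hit is a triage signal only: plenty of legitimate tools call
    these same endpoints."""
    return [host for host in LLM_API_HOSTS if host in text]
```

Pairing this with allowlisting of known-good AI tooling keeps the false-positive rate manageable.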
Nation-State Groups and AI Abuse
The use of LLMs extends beyond cybercriminals. Google reports that state-sponsored actors from China, Iran, and North Korea are employing AI for:
- Crafting convincing phishing lures and social engineering campaigns
- Building malicious infrastructure and writing obfuscated code
- Automating reconnaissance and data exfiltration
- Bypassing AI safety measures through creative prompt engineering
For example, China's APT41 and Iran's MuddyWater use Gemini for covert code development and malware research, often masking their requests as academic inquiries. North Korean groups leverage LLMs to produce deepfakes and craft fraudulent update instructions to steal credentials.
AI as the New Normal in Cybercrime
The shift to AI-powered malware is accelerating. With the accessibility and flexibility of modern AI models, attackers are evolving from sporadic AI use to embedding it at the core of their operations. The combination of low cost, high adaptability, and significant impact makes AI-driven threats especially attractive for scaling up malicious campaigns.
Key Takeaway
PROMPTFLUX marks a pivotal moment in cybersecurity. As malware becomes self-evolving and more autonomous, defenders face an urgent need to adapt. Traditional tools are losing their edge, so organizations must prioritize AI-aware defenses and stay alert to this fast-changing threat landscape.
Source: The Hacker News