Skip to Content

Critical Flaws in Google Gemini AI Expose New Security Risks

When Trustworthy AI Turns Into a Security Weakness

Security researchers uncovered a major flaw in Google’s Gemini AI suite, demonstrating how even industry-leading AI can become a risk vector for privacy breaches and data theft. Cybersecurity experts at Tenable identified and reported three separate but related vulnerabilities in Google Gemini, collectively known as the Gemini Trifecta. 

Inside the "Gemini Trifecta": Three Unique Vulnerabilities

Each flaw targeted a different aspect of the AI ecosystem, revealing unique paths for exploitation:

  • Prompt Injection in Gemini Cloud Assist: Here, attackers could embed malicious prompts within HTTP headers during log summarization. If exploited, this would let them manipulate AI-powered cloud resources, triggering unauthorized tasks using Gemini’s permissions in services like Cloud Functions and Compute Engine.

  • Search-Injection in Gemini Search Personalization: By tampering with a user’s Chrome search history using crafted JavaScript, attackers could trick Gemini into processing injected prompts as real queries. This loophole exposed sensitive saved data and even users’ location information.

  • Indirect Prompt Injection in Gemini Browsing Tool: This vulnerability allowed threat actors to siphon private data by abusing Gemini’s web page summarization feature. If Gemini fetched a malicious page, it could inadvertently transmit personal information to an external, attacker-controlled server.

How Exploits Happen And Why They’re So Difficult to Catch

The most dangerous scenarios involved AI being deceived into gathering confidential data or scanning outside its intended boundaries. For example, an attacker could instruct Gemini to collect asset lists or configuration details and stealthily transmit them through hyperlinks all without the user's awareness or visible cues, since no links or images needed to be rendered.

Some attacks did require user interaction, such as visiting a compromised site that poisoned browsing history. Once this occurred, any subsequent use of Gemini’s search personalization could trigger the hidden prompts, leading to unintended data leaks.

Rapid Response: Google’s Mitigations and Lessons Learned

Once notified, Google reacted swiftly with patches. They disabled hyperlink rendering in log summaries and introduced new protections to block prompt injections. While these actions addressed the immediate risks, the incident underscores a larger lesson: AI isn’t just a target, but also a tool for attackers.

Wider AI Security Concerns

Industry experts caution that as businesses embrace AI, the threat landscape broadens. AI tools often have deep access and automations that can be exploited in ways that traditional security protocols don’t anticipate. Similar vulnerabilities have surfaced in other platforms, like Notion’s AI agent, where prompt injection and task chaining enabled attackers to bypass permissions and exfiltrate sensitive data.

To stay protected, organizations must gain visibility into where AI tools operate, what data they touch, and how prompts, direct or indirect, might be weaponized. Policy enforcement and continuous monitoring are key to managing these new risks.

Key Takeaway: Stay Proactive as AI Advances

The story of Google Gemini’s vulnerabilities is a clear warning: AI security is a moving target. It demands constant vigilance, rapid patching, and a thorough understanding of how AI systems can be misused. As artificial intelligence becomes woven into the fabric of business operations, only a proactive approach can protect user privacy and valuable data assets.

Source: The Hacker News


Critical Flaws in Google Gemini AI Expose New Security Risks
Joshua Berkowitz October 1, 2025
Views 1320
Share this post