Whisper Leak: How Encrypted AI Chats Can Still Reveal Your Secrets

Are Your AI Conversations Truly Private?

Many people trust that encrypted messaging with AI chatbots is secure, but recent research from Microsoft challenges this assumption. A newly discovered threat, dubbed Whisper Leak, reveals that even conversations protected by robust encryption can leak sensitive details to determined observers.

Unpacking the Whisper Leak Threat

Whisper Leak is a sophisticated side-channel attack that targets the streaming nature of large language model (LLM) responses. When users chat with AI over the internet, their words are protected by Transport Layer Security (TLS). 

However, attackers monitoring the network, whether a snoop on the same public Wi-Fi, an ISP, or even a nation-state, can scrutinize how data packets move back and forth. By analyzing the size and timing of encrypted packets, and leveraging machine learning, adversaries can identify specific conversation topics without ever decrypting the messages themselves.

  • Streaming LLMs deliver responses in chunks, creating unique traffic patterns.
  • Packet analysis enables attackers to classify topics with surprising precision.
  • This threat persists even though conversation content remains encrypted.
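The observables an attacker works with can be sketched in a few lines. The toy trace below is invented for illustration; real captures would come from a packet sniffer, but the principle is the same: only timestamps and ciphertext sizes are visible, never plaintext.

```python
# Hypothetical sketch: reducing an encrypted packet trace to the two
# signals Whisper Leak exploits. Each record is (timestamp_seconds,
# ciphertext_bytes) as seen on the wire; content stays opaque.

def trace_features(trace):
    """Summarize a trace as (packet sizes, inter-arrival gaps)."""
    sizes = [size for _, size in trace]
    times = [t for t, _ in trace]
    gaps = [b - a for a, b in zip(times, times[1:])]
    return sizes, gaps

# Example: three streamed chunks of an LLM response (invented numbers).
trace = [(0.00, 152), (0.08, 987), (0.21, 402)]
sizes, gaps = trace_features(trace)
```

Because streaming models emit one chunk per token group, these size and timing sequences form a fingerprint of the response that survives TLS encryption.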

How Effective Is the Attack?

Microsoft’s investigation found that trained classifiers, built with tools like LightGBM, Bi-LSTM, and BERT, could pinpoint sensitive topics in encrypted LLM traffic with more than 98% accuracy.
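To make the idea concrete, here is a deliberately tiny stand-in for such a classifier: a nearest-centroid model over mean packet size, trained on synthetic traces. This is not Microsoft’s pipeline; the topic labels, trace generator, and separability are all invented assumptions, and the real attack uses far richer size-and-timing features fed to LightGBM, Bi-LSTM, or BERT models.

```python
# Hedged toy sketch (NOT the actual Whisper Leak pipeline): classify a
# trace by comparing its mean packet size against per-topic centroids
# learned from labeled examples. All data here is synthetic.
import random

random.seed(0)

def synth_trace(topic):
    # Invented assumption: "health" chats stream larger chunks than
    # "weather" chats, making the two topics separable.
    base = 900 if topic == "health" else 300
    return [base + random.randint(-80, 80) for _ in range(8)]

def mean(xs):
    return sum(xs) / len(xs)

# "Train": one centroid (average packet size) per topic.
train = [(synth_trace(t), t) for t in ("health", "weather") for _ in range(50)]
centroids = {
    t: mean([mean(tr) for tr, lab in train if lab == t])
    for t in ("health", "weather")
}

def classify(trace):
    m = mean(trace)
    return min(centroids, key=lambda t: abs(centroids[t] - m))
```

Even this crude model separates the two synthetic topics perfectly, which hints at why a serious gradient-boosted or neural classifier can exceed 98% accuracy on real traffic patterns.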

This vulnerability affects major AI models, including those from Alibaba, DeepSeek, Mistral, Microsoft, OpenAI, and xAI. While Google and Amazon models fared slightly better, thanks to their approach to batching token responses, they are not completely invulnerable.

The risk is substantial: attackers can deduce if users are discussing delicate issues such as political dissent, health, or financial crimes, just by observing patterns in the encrypted traffic. The longer the monitoring, the sharper the attacker’s insight, raising the stakes for privacy in both personal and professional contexts.

What Can Be Done? Recommended Mitigations

  • After Microsoft disclosed the issue, major LLM providers moved quickly to implement defenses. Adding random-length text to AI responses, for example, helps obscure token length and disrupts the attack’s effectiveness.

  • Microsoft recommends avoiding the discussion of highly sensitive topics with AI chatbots over untrusted networks, such as public Wi-Fi. Users should consider deploying a VPN, choosing non-streaming LLM models, or selecting providers that have patched this vulnerability.

  • For developers, it’s critical to apply stringent security controls, conduct regular red-team exercises, and fine-tune LLMs to withstand emerging side-channel and jailbreak attacks.
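The random-padding defense mentioned above can be sketched as follows. The JSON field name "p" and the chunk format are assumptions made for illustration; each provider’s actual obfuscation scheme differs.

```python
# Hedged sketch of the padding mitigation: append a random-length
# filler field to each streamed chunk so ciphertext sizes no longer
# track token lengths. Field names and sizes here are illustrative.
import json
import secrets
import string

def pad_chunk(text, max_pad=64):
    """Wrap a response chunk with 0..max_pad random filler characters."""
    filler = "".join(
        secrets.choice(string.ascii_letters)
        for _ in range(secrets.randbelow(max_pad + 1))
    )
    return json.dumps({"delta": text, "p": filler})
```

Because the filler length is drawn fresh for every chunk, two identical tokens no longer produce identically sized packets, which degrades the size signal the classifier depends on (timing patterns may still leak, so padding is a mitigation, not a cure).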

Wider Lessons for AI and Cybersecurity

This discovery underscores the evolving risks of AI adoption. Research shows that many open-source LLMs remain vulnerable to adversarial exploits, especially in complex, multi-turn conversations. As generative AI becomes more widespread, organizations must anticipate new operational risks and adopt layered security strategies to counteract them.

The key lesson from Microsoft’s findings: relying solely on encryption is not enough for AI privacy. Developers, enterprises, and end-users alike must stay alert, embrace multiple defensive measures, and adapt to the rapidly changing landscape of AI threats.

Bottom Line

AI chats may feel secure, but side-channel attacks like Whisper Leak prove they are not immune to exposure. To protect sensitive conversations, it’s crucial to combine strong encryption with additional safeguards, trust reputable vendors, and exercise caution when using AI on public or untrusted networks.

Source: The Hacker News

Joshua Berkowitz November 10, 2025