
Why Securing Remote MCP Servers Is Critical in the Age of AI

AI Integration Brings Powerful Capabilities—and New Security Risks

As organizations deploy advanced AI systems that depend on remote Model Context Protocol (MCP) servers, a new era of security challenges emerges. These servers enable seamless integration with external databases, APIs, and tools, empowering AI agents to deliver richer functionality. 

However, this same flexibility dramatically expands the potential attack surface, demanding a proactive security posture. Below, we examine Google Cloud's recent guidance on mitigating the MCP server attack surface and best practices for securing these servers.

Understanding the Risks of Remote MCP

According to Google Cloud, deploying MCP introduces risks such as tool poisoning, prompt injection, dynamic tool manipulation, session hijacking, unauthorized access, and data exposure.

When AI agents interact with multiple external resources, even minor misconfigurations or weak controls can create openings for sophisticated cyberattacks. These threats require more than traditional security approaches.

The Centralized Proxy Solution

To address these challenges, Google Cloud recommends implementing a centralized MCP proxy. This security layer, deployable on platforms like Cloud Run, Apigee, or GKE, acts as the single point of enforcement for vital controls:

  • Access control to ensure only authorized users and systems can interact with MCP servers

  • Audit logging for capturing detailed records of all activities

  • Secret and resource-use policies to automate encryption, key management, and usage limits

  • Real-time threat detection to spot and disrupt suspicious behavior immediately

This architecture streamlines security policy management and allows organizations to scale AI integrations without compromising their defenses.
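To make the proxy's role concrete, here is a minimal sketch of the first two controls above, access control and audit logging, applied at a single enforcement point before a request ever reaches a backend MCP server. The function names, policy table, and log format are illustrative assumptions, not a real Google Cloud or MCP API.

```python
import hashlib
import json
import time

# Hypothetical per-client allowlist: which tools each caller may invoke.
ALLOWED_CLIENTS = {"agent-prod": {"search_docs", "run_query"}}

# Append-only audit trail; in production this would stream to a log sink.
AUDIT_LOG = []

def proxy_request(client_id: str, tool: str, payload: dict) -> dict:
    """Enforce access control, then record an audit entry for every attempt."""
    allowed = tool in ALLOWED_CLIENTS.get(client_id, set())
    AUDIT_LOG.append({
        "ts": time.time(),
        "client": client_id,
        "tool": tool,
        "allowed": allowed,
        # Hash the payload so the log captures what was sent
        # without storing sensitive arguments in plaintext.
        "payload_sha256": hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest(),
    })
    if not allowed:
        return {"error": "access denied"}
    # A real proxy would forward the request to the backend MCP server here.
    return {"status": "forwarded", "tool": tool}

print(proxy_request("agent-prod", "run_query", {"sql": "SELECT 1"}))
print(proxy_request("unknown-agent", "run_query", {}))
```

Note that the denied request is still logged: capturing failed attempts is what makes the audit trail useful for the real-time threat detection the guidance calls for.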

Key Deployment Risks and How to Prevent Them

Google’s guidance highlights five major risks that need to be systematically addressed:

  • Unauthorized tool exposure from misconfigured manifests
  • Session hijacking that threatens user and agent sessions
  • Shadow tools with malicious endpoints masquerading as legitimate ones
  • Token theft and sensitive data leaks via insecure channels
  • Weak authentication controls that attackers can circumvent

By centralizing enforcement at the proxy, organizations can mitigate these issues efficiently and apply consistent protection across all MCP interactions.
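One way a proxy can counter the shadow-tool and misconfigured-manifest risks above is to pin each approved tool's manifest to a known hash at registration time and reject anything that no longer matches. The sketch below assumes a simplified manifest shape (name plus endpoint), not the actual MCP manifest format.

```python
import hashlib
import json

def manifest_digest(manifest: dict) -> str:
    """Stable hash of a tool manifest (sorted keys for determinism)."""
    return hashlib.sha256(
        json.dumps(manifest, sort_keys=True).encode()).hexdigest()

# Pins recorded when a tool is reviewed and approved.
PINNED = {}

def register_tool(manifest: dict) -> None:
    PINNED[manifest["name"]] = manifest_digest(manifest)

def verify_tool(manifest: dict) -> bool:
    """A tool is trusted only if its manifest hash matches the pin."""
    return PINNED.get(manifest["name"]) == manifest_digest(manifest)

good = {"name": "search_docs", "endpoint": "https://internal/mcp/search"}
register_tool(good)

# A shadow tool reuses the approved name but points at a malicious endpoint.
shadow = {"name": "search_docs", "endpoint": "https://evil.example/mcp"}
print(verify_tool(good))    # True
print(verify_tool(shadow))  # False
```

Because the check keys off the whole manifest rather than the tool name alone, a renamed endpoint or altered schema fails verification even when the name looks legitimate.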

Identity, Transport, and Policy: The Fundamentals

Securing identity, transport, and architecture is non-negotiable in modern AI environments; relying on decentralized, server-by-server controls is no longer viable. The centralized proxy model provides stronger security, robust observability, and simpler governance, which are key benefits for organizations looking to innovate quickly without taking on unnecessary risk.
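The identity side of this can be sketched as a signed, short-lived token bound to a client identity and verified on every request. HMAC-SHA256 with a shared secret stands in here for a real identity provider (e.g. OIDC-issued JWTs); the token format and helper names are assumptions for illustration only.

```python
import base64
import hashlib
import hmac
import json
import time

# Demo secret only; a real deployment would use a managed key service.
SECRET = b"demo-secret-do-not-use-in-production"

def issue_token(client_id: str, ttl_s: int = 300) -> str:
    """Mint a short-lived token: base64 claims + HMAC signature."""
    claims = {"sub": client_id, "exp": time.time() + ttl_s}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_token(token: str):
    """Return the client id if signature and expiry check out, else None."""
    body, _, sig = token.partition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged token
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        return None  # expired tokens force re-authentication
    return claims["sub"]

tok = issue_token("agent-prod")
print(verify_token(tok))        # agent-prod
print(verify_token(tok + "0"))  # tampered signature -> None
```

Short expiries limit the blast radius of the token-theft risk discussed earlier: a stolen credential stops working within minutes rather than persisting indefinitely.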

How Other Cloud Providers Address MCP Security

While Google’s approach is tailored for MCP, AWS and Azure promote similar best practices:

  • AWS, while lacking MCP-specific guidance, employs Session Manager, IAM, and VPC endpoints for access control, logging, and auditing.

  • Azure utilizes Azure Arc and Connected Machine Agent, enforcing robust RBAC and identity authentication, with remote access disabled by default.

The consensus: centralize access controls, avoid direct server exposure to the internet, and ensure comprehensive audit trails.

Modern Security for Modern AI

The era of perimeter-only security is over. Google Cloud’s strategy for securing MCP servers illustrates how cloud-native security must adapt: centralize enforcement, prioritize identity, automate auditing, and address the unique risks of AI-powered systems. By embracing these practices, organizations can unlock AI’s potential, securely and at scale.


Joshua Berkowitz October 25, 2025