mcp-agent from LastMile AI is a Python framework that connects the low-level Model Context Protocol to high-level, production-friendly agent patterns. It manages MCP server lifecycles, exposes unified tools and resources to models, and implements the agent patterns popularized by Anthropic's Building Effective Agents, all in a model-agnostic way. By handling the mechanics of MCP server connections, tool orchestration, and workflow patterns, it lets you focus on your application logic (Docs, 2025).
Key features
- MCP-native: Purpose-built for the protocol, with first-class support for tools, prompts, and resources (Model Context Protocol, 2024).
- Composable patterns: Parallel, Router, Intent Classifier, Orchestrator-Workers, Evaluator-Optimizer, plus a model-agnostic take on OpenAI Swarm (OpenAI, 2025).
- Simple Agent API: Define an Agent with a purpose and accessible MCP servers; the framework exposes those as tool calls to your LLM. See src/mcp_agent/agents/agent.py.
- Augmented LLM: A base interface that adds memory, tool use, and tracing around any provider. See src/mcp_agent/workflows/llm/augmented_llm.py.
- Model-agnostic: Works with OpenAI, Anthropic, Azure, Bedrock, Google, and more via optional extras in pyproject.toml.
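The composable patterns above are easy to picture in plain Python. Below is a minimal, framework-free sketch of the Router idea — in mcp-agent the routing decision would be made by an LLM, whereas here a keyword matcher stands in; all names are illustrative, not mcp-agent APIs.

```python
# Framework-free sketch of the Router pattern: classify a request,
# then dispatch it to the matching "agent" (here, a plain function).

def route(request: str, routes: dict) -> str:
    """Pick the first route whose keywords match the request."""
    lowered = request.lower()
    for name, (keywords, handler) in routes.items():
        if any(k in lowered for k in keywords):
            return handler(request)
    return "No route matched."

# Each route pairs trigger keywords with a handler standing in for an agent.
routes = {
    "files": (["file", "read", "directory"], lambda r: f"[filesystem agent] {r}"),
    "web": (["url", "fetch", "http"], lambda r: f"[fetch agent] {r}"),
}

print(route("Read the file README.md", routes))
```

Swapping the keyword matcher for an LLM call (or an embedding-based intent classifier) gives you the Router and Intent Classifier patterns without changing the dispatch shape.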
An Agent for Your MCP Client
As MCP gains adoption, more services expose tools, prompts, and resources through a shared protocol. That is powerful, but working directly with raw MCP connections, tool schemas, and multiple servers adds orchestration complexity.
mcp-agent solves this by:
- Managing MCP server connections reliably
- Presenting a clean Agent abstraction with tools available to an LLM as function calls
- Providing composable workflow patterns like Parallel, Router, Orchestrator-Workers, and Evaluator-Optimizer based on Anthropic’s guidance (Anthropic, 2024).
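To make the workflow-pattern idea concrete, here is a framework-free sketch of the Parallel (fan-out/fan-in) pattern: several "agents" work on the same input concurrently and an aggregator merges their answers. Stub coroutines stand in for real LLM calls; the names are illustrative, not mcp-agent APIs.

```python
import asyncio

# Stub coroutines standing in for LLM-backed agents.
async def proofreader(text: str) -> str:
    return f"proofread: {len(text.split())} words checked"

async def fact_checker(text: str) -> str:
    return "fact-check: no claims flagged"

async def fan_out_fan_in(text: str) -> str:
    # Fan out: run both agents concurrently on the same input.
    results = await asyncio.gather(proofreader(text), fact_checker(text))
    # Fan in: merge the results (in mcp-agent this would be another LLM call).
    return " | ".join(results)

summary = asyncio.run(fan_out_fan_in("MCP servers expose tools and resources."))
print(summary)
```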
Why I like it
The framework is small and pragmatic. You write normal Python, attach an LLM provider of your choice, and gain mature patterns without a heavy graph UI or hidden magic.
It also embraces MCP’s interoperability, so any MCP server instantly becomes a tool in your agent’s toolbox. The docs reinforce this simplicity with code-first examples and a friendly mental model (Docs, 2025).
```python
import asyncio

from mcp_agent.app import MCPApp
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM

app = MCPApp(name="finder_agent")

async def main():
    async with app.run():
        # Expose MCP servers as tools to the LLM
        finder = Agent(
            name="finder",
            instruction="Read files or fetch URLs and return the content.",
            server_names=["filesystem", "fetch"],
        )
        async with finder:
            llm = await finder.attach_llm(OpenAIAugmentedLLM)
            text = await llm.generate_str("Show me what's in README.md")
            print(text)

if __name__ == "__main__":
    asyncio.run(main())
```
Under the hood
The codebase is Python-first, distributed on PyPI, and organized around clear primitives. MCPApp supplies global context and configuration. Agent binds instructions and a set of MCP servers to an LLM. AugmentedLLM adds generation helpers, memory, tool calls, and tracing.
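The augmented-LLM idea can be sketched without the framework: wrap any provider's generate function with conversation memory and a trace log. The class below is a toy stand-in, not the real AugmentedLLM interface.

```python
# Toy sketch of an "augmented LLM": memory and tracing wrapped around
# any provider's generate function. Names are illustrative.

class MemoryLLM:
    def __init__(self, generate_fn):
        self.generate_fn = generate_fn   # provider call, swappable
        self.history = []                # memory: prior turns
        self.trace = []                  # tracing: what was called

    def generate_str(self, prompt: str) -> str:
        self.history.append({"role": "user", "content": prompt})
        reply = self.generate_fn(self.history)    # sees full history
        self.history.append({"role": "assistant", "content": reply})
        self.trace.append(("generate_str", prompt))
        return reply

# Stub provider that reports the turn number, standing in for OpenAI/Anthropic.
llm = MemoryLLM(lambda msgs: f"reply #{len(msgs) // 2 + 1}")
first = llm.generate_str("hello")
second = llm.generate_str("again")
```

Because the provider call is injected, the same wrapper works with any backend, which is the essence of the model-agnostic design described above.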
The project implements patterns like Parallel and Orchestrator as LLMs themselves, which makes them easy to chain and compose. I suggest starting with the README and browsing the examples for runnable apps.
Notable dependencies in pyproject.toml include mcp for protocol types and clients, pydantic and pydantic-settings for config, instructor for structured outputs, opentelemetry for tracing, and optional extras for providers like OpenAI and Anthropic.
Telemetry hooks are woven through the LLM and agent calls so you can observe tool selection, inputs, and outcomes.
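In spirit, those hooks look like a decorator that records each tool call's name, input, and outcome. The sketch below is illustrative only — mcp-agent emits OpenTelemetry spans rather than appending to a global list.

```python
import functools
import time

# Illustrative telemetry hook: wrap a tool function so every call is
# recorded with its name, inputs, output, and duration.

SPANS = []  # stand-in for an OpenTelemetry exporter

def traced(tool_name):
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            SPANS.append({
                "tool": tool_name,
                "input": args,
                "output": result,
                "duration_s": time.perf_counter() - start,
            })
            return result
        return inner
    return wrap

@traced("read_file")
def read_file(path: str) -> str:
    # Stand-in for a real MCP filesystem tool call.
    return f"<contents of {path}>"

read_file("README.md")
```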
Use cases
The repository ships with concrete applications. A Claude Desktop demo wraps an mcp-agent app as an MCP server so Claude can call your workflows directly.
A Streamlit Gmail agent uses an MCP server for Gmail to read, send, and triage emails.
There are RAG examples using Qdrant via MCP, and a Marimo notebook variant for reactive UIs.
Each scenario shows the same pattern: define the agents and servers, then let your LLM select tools and iterate. These mirror the patterns recommended by Anthropic for real products (Anthropic, 2024).
Community and contribution
mcp-agent is Apache 2.0 licensed and active. The project welcomes issues and pull requests, with a friendly CONTRIBUTING.md and a Discord for discussion. The examples double as end-to-end tests, and the maintainers encourage adding examples alongside features. Stars and forks suggest meaningful adoption in the MCP ecosystem.
Usage and license
Install with uv add "mcp-agent" or pip install mcp-agent, then run one of the examples by copying the sample env or YAML secrets. The license is Apache 2.0, which permits wide use, modification, and redistribution, provided you include attribution and a copy of the license. It also grants a patent license from contributors and disclaims warranties. See the LICENSE for full terms.
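As a rough illustration, a secrets file might look like the following. This is a hypothetical sketch, not the canonical schema — check the sample file shipped with the example you copy for the exact keys it expects.

```yaml
# Hypothetical secrets sketch; real examples ship their own sample file.
openai:
  api_key: "sk-..."
anthropic:
  api_key: "sk-ant-..."
```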
Impact and what is next
MCP is quickly becoming the USB-C of AI integrations, and mcp-agent gives developers a straight path from that protocol to working, observable agents. Because patterns are small and composable, it is easy to start simple and grow to orchestrated, multi-agent systems without rewriting your stack.
If you live in the OpenAI ecosystem, the Swarm pattern will feel familiar, but here it is provider-agnostic and ready to pair with MCP servers (OpenAI, 2025). Expect deeper durable execution, long-term memory, and streaming to keep improving.
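The handoff mechanic at the heart of Swarm can be sketched in a few lines: an agent's response can be another agent, which transfers control. Plain Python stands in for the LLM here; the class and function names are illustrative, not mcp-agent or Swarm APIs.

```python
# Toy sketch of Swarm-style handoffs: a responder may return another
# agent instead of a reply, and the loop switches to it.

class SwarmAgent:
    def __init__(self, name, respond):
        self.name = name
        self.respond = respond  # returns a reply string or another SwarmAgent

def run(agent, message, max_hops=3):
    for _ in range(max_hops):
        result = agent.respond(message)
        if isinstance(result, SwarmAgent):
            agent = result          # handoff: switch the active agent
            continue
        return agent.name, result
    raise RuntimeError("too many handoffs")

billing = SwarmAgent("billing", lambda m: "Refund issued.")
triage = SwarmAgent(
    "triage",
    lambda m: billing if "refund" in m else "How can I help?",
)

who, reply = run(triage, "I want a refund")
```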
The mcp-agent cloud platform will be the first avenue for commercializing the popular repository, giving users a simple way to deploy and scale their mcp-agent apps without running them locally. The idea is that a user can deploy an agent for tasks like podcast management and have it run seamlessly as an MCP server, connecting to any client (such as OpenAI or Anthropic) from the cloud.
About LastMile AI
LastMile AI is an enterprise-grade evaluation platform that gives developers the tools to test, evaluate, and benchmark AI applications. They specialize in deploying their evaluation framework in regulated industries such as banking and insurance.
LastMile AI raised its seed round nearly two years ago from Google's AI fund, but the company has successfully monetized its evaluation platform and has refrained from seeking additional funding so far.
LastMile AI also builds tooling to help teams harness generative AI, with open source repos like mcp-agent and AIConfig, and a focus on practical developer experience. Learn more on their GitHub org and site: github.com/lastmile-ai and lastmileai.dev.
Why Was mcp-agent Created?
The mcp-agent project was created as a side project by co-founder Sarmad Qadri over the Christmas break. Banks and insurance companies, the primary customers for the LastMile evaluation platform, closed down for the holidays, while startups remained active.
Sarmad saw a post about the need for MCP (before it was officially launched) and, excited about it because of his background working on the Language Server Protocol (LSP) at Microsoft, decided to build a lightweight agent framework for it and put it on GitHub. It was initially a side project that gained unexpected traction, eventually appearing on GitHub's trending front page.
Unlike many frameworks that predate MCP and are now retrofitting support for it, mcp-agent was built on MCP from the ground up, which lets its functionality and design decisions center on the protocol.
The team came from "big tech" and builds what they would use inside a large company. They are not trying to do everything for the user or lock them into proprietary systems (for example, forcing them to rip out existing observability platforms or transcribe workflows into proprietary languages). They aim to build the system that technologists and engineers would build for themselves if given the choice, and they believe that offering a free, unencumbered system with big-company design principles is a sharp differentiator.
Conclusion
If you want production-ready agent patterns that speak MCP out of the box, mcp-agent is a great starting point. Explore the code, run an example, and compose your own agent. Start with the repository and the docs, then plug in MCP servers from your stack.