
Turning Agents Into Sharable Software: Inside Docker’s cagent

cagent Makes AI Agents Portable with MCP, DMR, and OCI packaging
Docker

A small utility that feels like a platform: Docker's cagent turns AI agents into sharable, runnable software.

Teams often build a clever assistant or a small cluster of cooperating agents. Then real-world needs hit: ship it, run it on different machines, connect it to services, keep credentials safe, and help colleagues reproduce the setup.

docker/cagent takes an end-to-end approach. You describe agents in simple YAML, run them locally with a clean CLI and TUI, connect them to tools via the Model Context Protocol (MCP), and package or pull them as OCI artifacts so the experience travels intact.

Key takeaways

  • Multi-agent by design: Define a root coordinator and specialized sub-agents with clear instructions and roles.

  • Rich tool ecosystem via MCP: Connect containerized or remote MCP servers, or enable built-in tools like filesystem, memory, think, and todo.

  • Provider flexibility: Use OpenAI, Anthropic, Gemini, or local models via Docker Model Runner, with options to tune context limits and runtime flags.

  • Run, package, and share: Launch with a CLI/TUI, expose an API server, and push or pull agents as OCI artifacts.

  • Observability & safety: Real-time streaming, optional OpenTelemetry, and explicit tool-call approval workflows.

  • Clean telemetry controls: Anonymous, documented, and easily disabled when needed.

  • YAML-first ergonomics: Clear, declarative configuration with sensible defaults.

  • Great examples: A practical gallery of basic, advanced, and multi-agent setups.

  • Docker-native distribution: Agents can be shipped as OCI artifacts for reproducible runs.

  • TUI and non-interactive modes: Pick the interaction model that fits the task.

  • Remote runtime option: Defer agent loading to a server and stream results.

From one-off scripts to portable agent systems

Agent demos are easy; dependable workflows are hard. Most teams struggle to make agent configurations repeatable across laptops and CI, isolate risky tools, and scale from a single helper into a small, coordinated team. 

cagent frames agent design as configuration, gives those agents standardized, sandboxed tools, and then treats the result like software you can push and pull. Under the hood it speaks to multiple model providers, streams events in real time, and embraces a hierarchical agent model so coordination is first class rather than an afterthought.

Why it clicked for me

Two things stand out. First, the ergonomics: running an agent is literally cagent run ./examples/pirate.yaml. You can flip to a TUI, pipe in stdin, or run headless without changing the config. 

Second, the portability story: the same YAML can be pushed to Docker Hub, pulled somewhere else, and run as-is. That closes the loop from prototype to shareable artifact. The repo's README.md is unusually practical, and the examples folder makes it easy to learn by doing.

agents:
  root:
    model: openai
    description: A helpful AI assistant
    instruction: |
      You are a knowledgeable assistant that helps users with various tasks.
      Be helpful, accurate, and concise.
    toolsets:
      - type: mcp
        ref: docker:duckduckgo

models:
  openai:
    provider: openai
    model: gpt-5-mini

Under the hood: CLI flow, tools, server, and models

The command path in cmd/root/run.go wires the experience. It loads agent YAML (or fetches it from an OCI image), starts the runtime, and streams events to the terminal. 

It supports a TUI (Bubble Tea) for interactive runs, stdin piping for non-interactive sessions, and a remote runtime mode that defers agent loading to a server. You can toggle auto-approvals for tool calls or keep a human-in-the-loop confirmation step. 
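These interaction modes can be sketched as follows; the auto-approval flag name is an assumption, not confirmed by the source, so verify it with `cagent run --help` before relying on it:

```shell
# Interactive run: the Bubble Tea TUI starts when attached to a terminal
cagent run ./assistant.yaml

# Non-interactive: pipe a prompt over stdin for headless or CI use
echo "Summarize the open issues" | cagent run ./assistant.yaml

# Auto-approve tool calls instead of confirming each one interactively
# (flag name is illustrative -- check `cagent run --help`)
cagent run ./assistant.yaml --yolo
```

The same YAML file works in every mode; only the invocation changes.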

For packaging and distribution, cagent push and cagent pull wrap the OCI workflow so agent configs travel as standard artifacts.
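The OCI round trip looks roughly like this; the registry reference is a placeholder, and the exact argument order should be checked against the repo's docs:

```shell
# Package the YAML config and push it as an OCI artifact
cagent push ./assistant.yaml yourorg/assistant-agent

# On another machine: pull the artifact and run it as-is
cagent pull yourorg/assistant-agent
cagent run yourorg/assistant-agent
```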

Tooling is pluggable. The pkg/tools layer exposes built-in helpers (filesystem, think, todo, memory) and integrates with MCP. 

For containerized tools, cagent talks to the Docker MCP Gateway so an agent can say ref: docker:duckduckgo and gain safe, sandboxed web search in seconds. 

For remote or SSE-based MCP servers, configuration is explicit and supports headers and OAuth-like flows. The server side in pkg/server/server.go handles HTTP endpoints and streaming, while telemetry lives under pkg/telemetry with a context-first API so you can attach metrics to any run.
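A toolset block mixing a containerized tool with a remote SSE server might look like the sketch below; the field names for the remote entry are assumptions modeled on the repo's examples, so treat this as illustrative rather than canonical:

```yaml
agents:
  root:
    model: openai
    instruction: Use the search tools when asked for fresh information.
    toolsets:
      # Containerized tool via the Docker MCP Gateway
      - type: mcp
        ref: docker:duckduckgo
      # Remote SSE-based MCP server with an auth header
      # (field names are illustrative; check the examples folder)
      - type: mcp
        remote:
          url: https://mcp.example.com/sse
          transport_type: sse
          headers:
            Authorization: Bearer ${MCP_TOKEN}
```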

Model abstraction sits in pkg/model with provider-specific clients. Out of the box you can target OpenAI, Anthropic, and Google, or run local models through Docker Model Runner (DMR). DMR is exposed as an OpenAI-compatible endpoint and can be tuned via provider_opts runtime flags -- handy for llama.cpp backends and context-size control (Docker Docs, 2025). Adding a new provider follows the pattern in docs/PROVIDERS.md and pkg/model/provider/provider.go.
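A local-model setup through DMR might be declared like this; the model name and runtime flags are placeholders, and the `provider_opts` shape should be verified against docs/USAGE.md and the DMR documentation:

```yaml
models:
  local:
    provider: dmr
    model: ai/qwen3            # any model available to Docker Model Runner
    provider_opts:
      # Passed through to the llama.cpp backend (values are illustrative)
      runtime_flags: ["--ctx-size", "8192"]

agents:
  root:
    model: local
    instruction: You are a concise local assistant.
```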

Use cases that show range

The examples directory is a great tour. You will find creative assistants (haiku, pirate), diagnostic helpers that parse logs with built-in tools, research agents that pair a model with web search and a small memory store, and multi-agent teams that split responsibilities across developer, reviewer, and tester personas.
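A minimal developer/reviewer team in this style might look like the following sketch; the `sub_agents` wiring mirrors the hierarchical model the article describes, but the instructions and names are invented for illustration:

```yaml
agents:
  root:
    model: openai
    description: Coordinates the team
    instruction: |
      Route coding tasks to developer and have reviewer check the result
      before replying to the user.
    sub_agents: [developer, reviewer]
  developer:
    model: openai
    description: Writes code
    instruction: Implement the requested change and report back.
  reviewer:
    model: openai
    description: Reviews code
    instruction: Check the developer's output for bugs and style issues.

models:
  openai:
    provider: openai
    model: gpt-5-mini
```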

Because MCP tools run in containers, you can confidently wire access to GitHub, web scraping, databases, or even browser automation without contaminating the host. In practice, this pattern works well for documentation pipelines, small internal copilots, and time-boxed research tasks that need reproducible setups.

Community pulse and how to contribute

Issues and PRs show an active focus on developer experience and security. For example, there are requests to improve TUI handling for OAuth prompts and to support self-contained MCP server images without catalog indirection, plus provider refinements and history controls. See the project's open issues list for a sense of what is coming next: Issues. Contribution guidance lives in docs/CONTRIBUTING.md, and the repo includes a Taskfile.yml to standardize builds and linting.

Usage notes, telemetry, and license

Usage details are in docs/USAGE.md, including CLI shortcuts like /reset, /compact, and /eval. Telemetry is opt-out via an environment variable, and docs/TELEMETRY.md documents exactly what is, and is not, collected. The project is licensed under the Apache License 2.0, which permits commercial use, modification, distribution, and patent grants, provided you keep notices and include a copy of the license; see LICENSE.

Impact and what comes next

By fusing agent configuration with container-native packaging, cagent raises the bar for reproducibility. It also normalizes tool access through MCP: by leaning on Docker's MCP Toolkit and Catalog, you get isolation, verified images, and one-click setup on the desktop.

Looking ahead, two areas seem promising: richer remote runtimes and tighter authorization flows for remote MCP servers aligned with the June 2025 MCP Authorization spec (MCP, 2025). That should make enterprise integrations smoother while keeping the simple, YAML-first ergonomics intact.

About Docker

Docker builds tools that simplify how developers create, share, and run software. Millions use Docker Desktop and Docker Hub daily. The company's newer AI features -- like Docker Model Runner and the MCP Toolkit -- extend that mission to agentic and model-centric workflows.

Docker emphasizes developer focus, open collaboration, and outcome-driven execution. Products span Desktop, Hub, Scout, Build Cloud, and more. See Docker: Company for an overview.

Quick takeaway

If you are building agents that need to travel across machines, teams, or environments, cagent offers a clean, practical path from a single YAML to a portable artifact you can pull anywhere. 

The built-in TUI makes it feel friendly. The MCP gateway keeps tools safe. And the provider abstraction lets you pick the right model for the job, including local options via DMR. Start with the examples, then peek at cmd/root/run.go and pkg/tools to understand the flow. When it is time to share, try cagent push. It is a short hop from a helpful agent to a reproducible capability your whole team can use.


Authors: Docker
Joshua Berkowitz, September 23, 2025