The speed of innovation in AI is staggering, but many teams find themselves trapped by vendor-specific solutions. With Docker's open-source cagent and the flexibility of GitHub Models, you can orchestrate advanced AI agents without tying your fate to a single provider.
This combination lets you design, deploy, and share robust multi-model AI workflows from a single configuration file, future-proofing your stack and offering real adaptability for your latest AI initiatives.
What Makes Docker cagent Stand Out?
cagent is a multi-agent runtime that simplifies AI agent management through a declarative YAML file. Gone are the days of juggling Python environments or patching together multiple SDKs. Instead, you define agent logic, models, toolsets, and delegation in one place. Its key features include:
- Declarative YAML setup: Centralize agent logic, model selection, and tools in a single file.
- Multi-provider model support: Seamlessly access models from OpenAI, Anthropic, Google Gemini, or run them locally with Docker Model Runner.
- MCP integration: Connect agents to external tools and data sources via the Model Context Protocol.
- Secure sharing: Package and distribute agents through Docker Hub, just like any container.
- Advanced reasoning: Use built-in "think," "todo," and "memory" features for complex, coordinated workflows.
With cagent, each agent runs in isolation while easily delegating tasks to sub-agents, making it possible to mirror human-like collaboration for intricate projects.
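Because the configuration is plain YAML, you can sanity-check it before handing it to the runtime. Below is a minimal sketch, assuming the third-party PyYAML package; the embedded config and the `validate_agents` helper are illustrative, not part of cagent itself.

```python
# Sanity-check a cagent-style YAML config before running it.
# Assumes PyYAML (pip install pyyaml); the config snippet is illustrative.
import yaml

CONFIG = """
agents:
  root:
    model: openai/gpt-5-mini
    description: Bug investigator
    sub_agents: [fixer]
  fixer:
    model: anthropic/claude-sonnet-4-5
    description: Fix implementer
"""

def validate_agents(text: str) -> list[str]:
    """Parse the config and return any sub_agent references that don't resolve."""
    agents = yaml.safe_load(text)["agents"]
    return [sub
            for spec in agents.values()
            for sub in spec.get("sub_agents", [])
            if sub not in agents]

print(validate_agents(CONFIG))  # → [] (every sub_agent reference resolves)
```

A check like this catches a misspelled sub-agent name in seconds, before any model call is made.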
Below is an example configuration, provided by Docker, for a two-agent team that debugs problems:
```yaml
agents:
  root:
    model: openai/gpt-5-mini # Change to the model that you want to use
    description: Bug investigator
    instruction: |
      Analyze error messages, stack traces, and code to find bug root causes.
      Explain what's wrong and why it's happening.
      Delegate fix implementation to the fixer agent.
    sub_agents: [fixer]
    toolsets:
      - type: filesystem
      - type: mcp
        ref: docker:duckduckgo
  fixer:
    model: anthropic/claude-sonnet-4-5 # Change to the model that you want to use
    description: Fix implementer
    instruction: |
      Write fixes for bugs diagnosed by the investigator.
      Make minimal, targeted changes and add tests to prevent regression.
    toolsets:
      - type: filesystem
      - type: shell
```

Introducing GitHub Models
GitHub Models is your gateway to production-level language models from industry leaders such as OpenAI, Meta, Microsoft, and DeepSeek. One GitHub Personal Access Token grants access to a wide range of models, all under GitHub's trusted infrastructure. Notable benefits include:
- Unified access: Use top-tier models with a single authentication token.
- Easy switching: Swap between language models as requirements evolve.
- Built-in safety: Leverage content filters and advanced controls.
- Ready for production: Infrastructure tailored for real-world, agentic AI workflows.
The GitHub Marketplace ensures you always have access to the latest and most powerful models, keeping your AI stack current and competitive.
Configuring cagent with GitHub Models
Setting up cagent with GitHub Models is refreshingly simple. GitHub Models uses an OpenAI-compatible API, allowing you to update only the base URL and authenticate using your GitHub token. Here's how to get started:
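Because the endpoint is OpenAI-compatible, you can exercise it with nothing but the standard library. The sketch below is illustrative: the model name is an assumption, and only the request-building helper is specific to this example.

```python
# Minimal call to GitHub Models' OpenAI-compatible chat endpoint using only
# the standard library. Set GITHUB_TOKEN to a PAT with the "models" scope
# before running; the model name is illustrative.
import json
import os
import urllib.request

BASE_URL = "https://models.github.ai/inference"

def build_request(token: str, model: str, prompt: str) -> urllib.request.Request:
    """Assemble the POST request for the OpenAI-style /chat/completions route."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

if os.environ.get("GITHUB_TOKEN"):  # only hit the network when a token is set
    req = build_request(os.environ["GITHUB_TOKEN"], "openai/gpt-5-mini", "Hello!")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Swapping providers later means changing only the `model` string and the token, which is exactly the flexibility cagent builds on.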
Quick Setup Steps
- Prepare your environment: Install Docker Desktop (v4.49+), enable the MCP Toolkit, create a GitHub Personal Access Token (with "models" scope), and download the cagent binary.
- Define your agent: Draft a YAML file outlining the agent's workflow, models, and toolsets. The example in the source blog demonstrates a multi-agent podcast generator leveraging GitHub Models and DuckDuckGo for research.
- Run locally: Execute your agent with `cagent run ./your_agent.yaml`.
- Share and deploy: Package your agent as a Docker image with `cagent push` and distribute it via Docker Hub.
- Collaborate effortlessly: Teammates can pull and run your agent with a single command, ensuring reproducibility and simplicity.
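The local-run and publish steps above boil down to a few commands. This is an illustrative sketch; the token value and file name are placeholders, and `cagent --help` is the authority on exact arguments.

```shell
# PAT with the "models" scope (placeholder value)
export GITHUB_TOKEN=ghp_your_token_here

# Run the agent locally from its YAML definition
cagent run ./your_agent.yaml

# Package and share the agent via Docker Hub
cagent push
```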
This method eliminates dependency headaches and gives you true portability and shareability across teams and platforms.
Case Study: Podcast Generator Agent
A practical example highlighted by Docker is a podcast generation workflow. The primary agent, acting as a Podcast Director, delegates research to a "researcher" sub-agent and scriptwriting to a "scriptwriter" sub-agent. External tools like DuckDuckGo are integrated via MCP, showcasing how cagent and GitHub Models together enable seamless orchestration and collaboration across specialized tasks, all managed from a single YAML file.
Deployment and sharing remain frictionless, leveraging Docker's familiar tools for packaging and distribution.
```yaml
#!/usr/bin/env cagent run
agents:
  root:
    description: "Podcast Director - Orchestrates the entire podcast creation workflow and generates text file"
    instruction: |
      You are the Podcast Director responsible for coordinating the entire podcast creation process.
      Your workflow:
      1. Analyze input requirements (topic, length, style, target audience)
      2. Delegate research to the research agent, which can use DuckDuckGo web search
      3. Pass the researched information to the scriptwriter for script creation
      4. Output is generated as text, which can be saved to a file or printed out
      5. Ensure quality control throughout the process
      Always maintain a professional, engaging tone and ensure the final podcast meets broadcast standards.
    model: github-model
    toolsets:
      - type: mcp
        command: docker
        args: ["mcp", "gateway", "run", "--servers=duckduckgo"]
    sub_agents: ["researcher", "scriptwriter"]
  researcher:
    model: github-model
    description: "Podcast Researcher - Gathers comprehensive information for podcast content"
    instruction: |
      You are an expert podcast researcher who gathers comprehensive, accurate, and engaging information.
      Your responsibilities:
      - Research the given topic thoroughly using web search
      - Find current news, trends, and expert opinions
      - Gather supporting statistics, quotes, and examples
      - Identify interesting angles and story hooks
      - Create detailed research briefs with sources
      - Fact-check information for accuracy
      Always provide well-sourced, current, and engaging research that will make for compelling podcast content.
    toolsets:
      - type: mcp
        command: docker
        args: ["mcp", "gateway", "run", "--servers=duckduckgo"]
  scriptwriter:
    model: github-model
    description: "Podcast Scriptwriter - Creates engaging, professional podcast scripts"
    instruction: |
      You are a professional podcast scriptwriter who creates compelling, conversational content.
      Your expertise:
      - Transform research into engaging conversational scripts
      - Create natural dialogue and smooth transitions
      - Add hooks, sound bite moments, and calls-to-action
      - Structure content with clear intro, body, and outro
      - Include timing cues and production notes
      - Adapt tone for target audience and podcast style
      - Create multiple format options (interview, solo, panel discussion)
      Write scripts that sound natural when spoken and keep listeners engaged throughout.
    toolsets:
      - type: mcp
        command: docker
        args: ["mcp", "gateway", "run", "--servers=filesystem"]
models:
  github-model:
    provider: openai
    model: openai/gpt-5
    base_url: https://models.github.ai/inference
    env:
      OPENAI_API_KEY: ${GITHUB_TOKEN}
```

Build AI Without Boundaries
Traditional AI workflows often come at the cost of vendor lock-in, scattered API keys, and fragmented toolchains. By combining Docker cagent with GitHub Models, you achieve true vendor independence and unlock the freedom to select the best models for any job, while managing everything with a unified workflow and a single token. Your AI projects become more flexible, cost-efficient, and future-ready, no matter how quickly the landscape shifts.
Your AI, Your Rules
Thanks for reading! The freedom to mix and match AI models and tools, without being tied to a single vendor, is a game-changer for teams who want to stay agile. But designing these systems well requires more than just technical know-how. It demands experience across diverse projects, industries, and technology stacks. Over my 20+ year career, I have helped businesses of all sizes architect intelligent automation and custom software that genuinely moves the needle, whether that means cutting operational costs, eliminating manual bottlenecks, or unlocking entirely new capabilities.
Curious how to put vendor-independent AI to work for your organization? Whether you are just getting started with agent orchestration or ready to scale an existing system, I would be happy to share what I have learned. Reach out to explore how my automation and software development expertise can help you build AI solutions on your terms. Schedule a free consultation and let's discuss your next steps.