For years, Docker Compose has been a staple for developers wanting to spin up multi-container environments locally. Now, with the addition of provider services in Docker Compose v2.36.0, the evolution continues.
You can orchestrate not only containers, but external resources like databases, VPNs, SaaS APIs, or even AI services directly from your `compose.yaml` file. This means your workflow becomes more streamlined, and setup headaches fade away.
Why 'Providers' Are a Serious Upgrade
Modern applications rarely live in isolation; they depend on external services. Before provider services, developers had to juggle Compose alongside scripts or custom tools to manage resources outside of Docker’s ecosystem. This fragmented workflow led to complexity, slower onboarding, and more maintenance overhead.
With provider services, you can treat any external dependency as a first-class citizen, defined and managed right alongside your containers. No more patchwork solutions: everything is unified within Compose.
Getting Started with Provider Services
Adding a provider service is simple. Instead of an `image`, you specify a `provider` with a `type` (the name of an executable in your system's `$PATH`) and any needed `options`.
For example, using the Telepresence provider plugin, you can redirect Kubernetes traffic to a local service. When you run `docker compose up`, Compose will handle installing, configuring, and tearing down the integration automatically.
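A provider service might look like the sketch below. The `type` names the plugin executable on your `$PATH`; the option keys (`service`, `namespace`, `port`) are illustrative placeholders, so consult the specific plugin's documentation for the options it actually accepts.

```yaml
services:
  api:
    build: .
    # The app only starts once the external dependency is ready.
    depends_on:
      - k8s-intercept

  k8s-intercept:
    provider:
      type: compose-telepresence-plugin  # executable resolved via $PATH
      options:
        # Option names below are hypothetical examples, not the
        # plugin's real schema.
        service: backend
        namespace: staging
        port: 8080
```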
- Up Action: Installs and configures the provider, setting up resources like traffic intercepts or cloud tunnels.
- Down Action: Cleans up by removing integrations and ending sessions.
Each plugin offers custom configuration, and the Compose Language Server helps you set these up with inline suggestions.
How Provider Services Work Under the Hood
When Compose encounters a `provider` key, it looks for an executable matching the `type` you've defined.
This binary receives configuration details as command-line flags and communicates with Compose via JSON over standard input and output. This structured protocol supports:
- info: Status updates for the Compose log
- error: Error reporting
- setenv: Supplying environment variables to dependent services
- debug: Detailed output for troubleshooting
This approach allows seamless integration with anything from managed databases to AI inference engines. For developers interested in the technical details, the official protocol specification is available for reference.
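The message types above can be sketched as JSON lines a plugin writes to stdout. This is a minimal illustration in Go: the envelope with `type` and `message` fields follows the shape described by the protocol, but treat the exact field names as an assumption to verify against the official specification.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// message mirrors the JSON envelope a provider plugin sends to Compose
// over stdout: a "type" discriminator plus a free-form "message" payload.
// (Field names are assumed from the protocol description; verify against
// the published spec.)
type message struct {
	Type    string `json:"type"`
	Message string `json:"message"`
}

// emit serializes one protocol message as a single JSON line.
func emit(kind, payload string) string {
	b, _ := json.Marshal(message{Type: kind, Message: payload})
	return string(b)
}

func main() {
	// Status update for the Compose log.
	fmt.Println(emit("info", "tunnel established"))
	// Hand an environment variable to dependent services.
	fmt.Println(emit("setenv", "DB_URL=postgres://localhost:5432/app"))
	// Verbose output, surfaced only when debugging is enabled.
	fmt.Println(emit("debug", "raw response: 200 OK"))
}
```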
Creating Your Own Provider Plugin
The extensibility of provider services is a major strength. Anyone can build a provider in any language, as long as it follows the protocol. At a minimum, a provider plugin should:
- Handle `up` and `down` commands
- Accept options as command-line flags
- Communicate using structured JSON
- Support debug output for troubleshooting
The compose-telepresence-plugin is an example in Go, bridging local containers with remote Kubernetes services.
To build your own plugin:
- Read the protocol specification.
- Parse CLI flags for configuration.
- Implement JSON-based request/response handling.
- Add robust debug output.
- Place the binary in your `$PATH`.
- Reference it in `compose.yaml` as a provider service.

See the full spec for more detailed implementations.
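The steps above can be sketched as a tiny Go skeleton. This is not a complete plugin, just the dispatch-and-respond shape: it parses `--key=value` flags, handles `up` and `down`, and replies with JSON lines. The `--name` option and the `RESOURCE_NAME` variable are invented for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// send writes one protocol message (a JSON line) to stdout for Compose.
func send(kind, msg string) {
	b, _ := json.Marshal(map[string]string{"type": kind, "message": msg})
	fmt.Println(string(b))
}

// run dispatches the command Compose invokes the binary with. Options
// arrive as --key=value flags after the command; "--name" here is a
// hypothetical option, not part of any real plugin.
func run(args []string) error {
	if len(args) < 1 {
		return fmt.Errorf("expected a command: up or down")
	}
	opts := map[string]string{}
	for _, a := range args[1:] {
		if kv := strings.SplitN(strings.TrimPrefix(a, "--"), "=", 2); len(kv) == 2 {
			opts[kv[0]] = kv[1]
		}
	}
	switch args[0] {
	case "up":
		send("info", "provisioning "+opts["name"])
		// A real plugin would create the external resource here, then
		// expose connection details to dependent services via setenv.
		send("setenv", "RESOURCE_NAME="+opts["name"])
		return nil
	case "down":
		send("info", "tearing down "+opts["name"])
		return nil
	default:
		return fmt.Errorf("unknown command %q", args[0])
	}
}

func main() {
	if err := run(os.Args[1:]); err != nil {
		send("error", err.Error())
		os.Exit(1)
	}
}
```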
The Future of Provider Services
Provider services are just beginning to showcase their potential. As the community contributes feedback and new plugins, Docker Compose will evolve into a unified command center for orchestrating containers, cloud resources, tunnels, and even AI runtimes.
The days of cobbling together scripts and tools are ending: provider services offer a declarative, native, and extensible approach for managing every part of your dev stack.
Developers are encouraged to experiment, build, and share new providers, helping shape the future of platform-aware development. With Compose as your control center, the possibilities are wide open.