AI-Powered Mock APIs: Supercharging Testing with Docker and Microcks

Modern development demands more than cookie-cutter mock data. Test environments need to simulate real-world conditions, and static responses often fall short. By combining Docker Model Runner and Microcks, teams can now leverage local AI models to generate dynamic, lifelike mock APIs, taking the realism and flexibility of testing to new heights.
Harnessing the Power of LLMs, Docker, and Microcks
Large Language Models (LLMs) excel at producing diverse, non-deterministic data, making them ideal for simulating complex application scenarios. Microcks, an open-source CNCF project, streamlines the deployment of mock services based on OpenAPI schemas, ensuring safe and isolated testing.
Docker Model Runner simplifies running LLMs locally, offering an OpenAI-compatible API that capitalizes on local hardware for AI workloads.
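A quick way to confirm that the endpoint behaves like an OpenAI-compatible server is to send it a chat completion request directly. The sketch below assumes host-side TCP access is enabled on the default port 12434 and that the model used later in this guide has already been pulled; adjust host, port, and path to your Docker Desktop settings:

# Minimal smoke test of Docker Model Runner's OpenAI-compatible endpoint (host, port, and path are assumptions)
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/qwen3:8B-Q4_0",
        "messages": [{"role": "user", "content": "Return one sample pastry as a JSON object."}]
      }'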
The integration of these tools empowers teams to create mock API responses that mirror real-world data, improving the detection of hard-to-find bugs and enhancing user experience during development and QA cycles.
Getting Started: Step-by-Step Integration
1. Setting Up Docker Model Runner
- Enable Docker Model Runner within Docker Desktop.
- Pull your preferred model (e.g., ai/qwen3:8B-Q4_0) with:
docker model pull ai/qwen3:8B-Q4_0
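Before wiring anything into Microcks, it can help to confirm the model is present and answering. A minimal check, assuming a recent Docker Desktop with the Model Runner CLI (subcommand behavior may vary by version):

# List locally available models to confirm the pull succeeded
docker model list
# Send a one-off prompt to verify inference works end to end
docker model run ai/qwen3:8B-Q4_0 "Reply with a one-line JSON greeting."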
2. Connecting Microcks to Your Local AI
- Clone the Microcks repository and access its Docker Compose configuration.
- Edit application.properties to configure the AI Copilot feature, directing it to the Docker Model Runner endpoint at model-runner.docker.internal:80.
- Ensure AI Copilot is activated in features.properties (a sample configuration is sketched below).
This setup allows Microcks to request on-demand, AI-generated samples from your local LLM, eliminating external dependencies and boosting security and performance.
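As a rough sketch of what that configuration can look like (property names follow the Microcks AI Copilot documentation at the time of writing and may differ between releases, so treat them as assumptions and verify against your Microcks version):

# application.properties - point the AI Copilot at the local OpenAI-compatible endpoint
ai-copilot.enabled=true
ai-copilot.implementation=openai
ai-copilot.openai.api-url=http://model-runner.docker.internal:80/engines/llama.cpp/v1/
ai-copilot.openai.api-key=not-used-for-local-models
ai-copilot.openai.model=ai/qwen3:8B-Q4_0

# features.properties - expose the AI Copilot feature in the Microcks UI
features.feature.ai-copilot.enabled=true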
3. Launching Microcks with AI Copilot
- Start Microcks in development mode using docker compose (an example command is shown below).
- Access the UI at http://localhost:8080 and install a sample API (such as the Pastry API) for instant experimentation.
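For reference, a typical launch looks like the sketch below, assuming the development-mode Compose file sits under install/docker-compose in your Microcks checkout (the exact file name may differ between releases):

# From the root of the cloned Microcks repository
cd install/docker-compose
docker compose -f docker-compose-devmode.yml up -d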
Generating Realistic Mock Data on Demand
After deploying an API, navigate to its service page in Microcks. Choose an operation and open the AI Copilot Samples dialog. Microcks will communicate with your local LLM (via Docker Model Runner) to craft realistic example responses tailored to your API schema and the current request.
These AI-powered samples can be reviewed, edited, and used immediately as mock endpoints. For example, a curl call to your mock API now yields unique, context-aware JSON responses, like a pastry object with dynamic status and description fields that align with the request parameters.
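As an illustration, with the Pastry sample installed the exchange might look like the sketch below; the mock path depends on the exact sample name and version shown in the Microcks UI, and the JSON is only an example of the kind of payload the LLM may produce:

# Hit the Microcks mock endpoint for the Pastry sample (path is an assumption; copy the real one from the UI)
curl -s 'http://localhost:8080/rest/API+Pastry+-+2.0/2.0.0/pastry/Millefeuille' -H 'Accept: application/json'
# Possible AI-generated response shape:
# {
#   "name": "Millefeuille",
#   "description": "Layered puff pastry filled with vanilla cream, baked fresh this morning",
#   "size": "M",
#   "price": 4.4,
#   "status": "available"
# }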
Since the LLM generates new data each run, you get rich test coverage and the ability to validate edge cases and business logic more thoroughly.
Ensuring Reproducibility and Team Consistency
To maintain consistency across environments, specify the Docker Model Runner service and AI model in your compose.yml. This ensures all team members and CI processes use the same configuration, supporting reliable, repeatable test results and smoother collaboration.
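One way to pin both pieces is a model-backed service in Compose, sketched below on the assumption that your Compose version supports the model provider type that delegates to Docker Model Runner; service names and the image tag are placeholders to adapt to your stack:

# compose.yml (sketch)
services:
  microcks:
    image: quay.io/microcks/microcks-uber:latest   # placeholder tag; pin a specific version in practice
    ports:
      - "8080:8080"
    depends_on:
      - llm

  llm:
    provider:
      type: model                    # handled by Docker Model Runner
      options:
        model: ai/qwen3:8B-Q4_0      # the exact model every developer and CI run will use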
The Takeaway: Smarter, More Authentic Testing
Docker Model Runner and Microcks together represent a leap forward in test automation. By generating synthetic yet authentic API responses with local LLMs, development teams can expand test coverage, identify subtle bugs, and deliver higher quality software. This AI-driven approach blends flexibility, realism, and security, all within a local-first workflow tailored to modern development needs.
Ready to explore or share your insights? Join the Docker Forum community and contribute to the evolution of local AI-powered testing.
Source: docker.com/blog/ai-powered-mock-apis-for-testing-with-docker-and-microcks/