AI-powered development tools are moving beyond rigid interfaces, enabling more natural and responsive user experiences. A standout innovation in this space is elicitation, a feature of the Model Context Protocol (MCP) now supported in GitHub Copilot for Visual Studio Code. Elicitation allows AI to dynamically ask for missing information, turning static workflows into seamless, conversational interactions that better match users' needs.
Why Elicitation Is a Game-Changer for Developers
Historically, AI integrations demanded that users supply every parameter or rely on generic defaults, often leading to confusion or frustration. Elicitation empowers the AI to pause, identify what's missing, and prompt users for only the necessary details. This not only boosts workflow efficiency but also ensures clarity at every step, making the development process less error-prone and more intuitive.
Unifying Workflows and Reducing Complexity
Early attempts at implementing elicitation, such as in a turn-based game server, led to tool sprawl with multiple overlapping commands. This redundancy made it hard for AI agents like Copilot to navigate, sometimes choosing the wrong tool due to similar names.
The solution lay in consolidating tools, enforcing clear naming conventions, and eliminating duplication. This streamlined approach provided Copilot with one clear pathway for each action, with elicitation seamlessly filling in any blanks.
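The consolidation idea can be sketched in plain Python. This is a hypothetical illustration, not the actual server code from the article: the tool name `play_game` and its parameters are invented here to show how one entry point with explicit, optional parameters replaces several overlapping commands, with any omitted value becoming a candidate for elicitation.

```python
# Hypothetical sketch: one consolidated tool instead of several
# similarly named ones (start_tictactoe, new_ttt_game, ...) that an
# agent could confuse. Whatever the caller omits is reported back as
# "missing" so it can be elicited instead of silently defaulted.
from typing import Optional

REQUIRED_PARAMS = ("game_type", "difficulty", "player_name")

def play_game(game_type: Optional[str] = None,
              difficulty: Optional[str] = None,
              player_name: Optional[str] = None) -> dict:
    """Single clear pathway for starting a game; flags missing inputs."""
    provided = {"game_type": game_type,
                "difficulty": difficulty,
                "player_name": player_name}
    missing = [name for name in REQUIRED_PARAMS if provided[name] is None]
    if missing:
        return {"status": "needs_input", "missing": missing}
    return {"status": "ready", **provided}
```

Calling `play_game(game_type="tic-tac-toe")` yields `{"status": "needs_input", "missing": ["difficulty", "player_name"]}`, giving the agent exactly one pathway and a precise list of blanks to fill.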
How Elicitation Operates Within Copilot
- Parameter Verification: When a command is triggered (say, "let's play tic-tac-toe"), the server checks whether all required inputs (game type, difficulty, player name) are present.
- Targeted Prompts: Missing details prompt the AI to pause and ask the user schema-driven questions directly in the Copilot chat.
- Context-Aware Completion: Once all necessary information is gathered, the request completes, reflecting the user’s preferences and choices.
This adaptive process replaces static defaults with dynamic, user-driven experiences. For example, if a user specifies "I want to play as X" or "make it hard mode," Copilot only asks for truly missing parameters, never more than needed.
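The three steps above can be condensed into a short loop. This is a simplified sketch rather than the MCP SDK's actual API: `ask_user` stands in for the client-side prompt that a real elicitation round-trip would perform, and the schema fields mirror the game example from the article.

```python
# Hypothetical sketch of the elicitation flow: verify parameters,
# prompt only for what is missing, then complete with everything filled.
SCHEMA = {
    "game_type": "Which game would you like to play?",
    "difficulty": "What difficulty (easy/medium/hard)?",
    "player_name": "What name should we use for you?",
}

def elicit_missing(params: dict, ask_user) -> dict:
    """Fill missing schema fields via targeted, one-at-a-time prompts."""
    completed = dict(params)
    for field, question in SCHEMA.items():
        if completed.get(field) is None:          # 1. parameter verification
            completed[field] = ask_user(question)  # 2. targeted prompt
    return completed                               # 3. context-aware completion
```

If the user already said "I want to play as X" and "make it hard mode", only `game_type` would trigger a question; fields the user supplied are never re-asked.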
Design Insights: Clarity and Flexibility Matter
1. Tool Naming and Consolidation
Ambiguous or redundant tool names can lead to unexpected AI choices. Merging similar tools and using clear, descriptive names helps both the AI and users navigate available options with confidence.
2. Managing Partial Input
Most users won’t provide every detail upfront. Effective elicitation parses initial commands, identifies missing info, and prompts only when necessary to avoid repetitive or irrelevant questions.
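One way to handle partial input is to first extract whatever the free-form command already provides, then elicit only the rest. The keyword matching below is a deliberately naive, hypothetical sketch; a real MCP server would typically rely on the model's structured tool-call arguments rather than regexes.

```python
# Hypothetical sketch: pull known parameters out of a free-form command
# so the server prompts only for genuinely missing details.
import re

def parse_command(text: str) -> dict:
    params = {"game_type": None, "difficulty": None, "player_name": None}
    lowered = text.lower()
    if "tic-tac-toe" in lowered or "tic tac toe" in lowered:
        params["game_type"] = "tic-tac-toe"
    for level in ("easy", "medium", "hard"):
        if f"{level} mode" in lowered or f"make it {level}" in lowered:
            params["difficulty"] = level
    match = re.search(r"play as (\w+)", text, re.IGNORECASE)
    if match:
        params["player_name"] = match.group(1)
    return params
```

For "Let's play tic-tac-toe, make it hard mode, I'll play as X", all three fields are recovered and no follow-up questions are needed; for "let's play tic-tac-toe" alone, only difficulty and player name would be elicited.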
3. Embracing Iterative Refinement
Building robust elicitation workflows is an ongoing process. Starting with basic functionality, incorporating user feedback, and incrementally refining prompts and logic leads to better, more delightful user experiences. Copilot's own coding capabilities can accelerate this improvement cycle.
Best Practices for Implementing Elicitation
- Apply the DRY (Don’t Repeat Yourself) principle by consolidating tools to minimize overlap.
- Use consistent naming for tools and schema properties to enhance clarity and maintainability.
- Monitor real-world usage to catch edge cases and refine elicitation logic where needed.
- Stay up to date with MCP specification changes as elicitation features continue to evolve.
Takeaway: Towards More Human-Centric AI Tools
Progress in AI tooling hinges on closing the gap between advanced models and human workflows. MCP elicitation in GitHub Copilot is a prime example of this, empowering developers with experiences that feel intuitive, adaptive, and collaborative. As these capabilities expand, developers can look forward to even richer, more context-aware partnerships with AI, redefining not just software creation but the entire development experience.
Source: GitHub Blog – Chris Reddington
Building Smarter AI Interactions: How MCP Elicitation Elevates GitHub Copilot