
Harnessing the Power of Tools in Gemini API Models In Place of MCP Servers

Gemini Has Yet to Support Model Context Protocol


The Gemini API offers robust capabilities for developers to connect large language models to external systems and data sources through the use of "tools." This allows for the creation of dynamic, interactive applications that can perform actions, retrieve real-time information, and interact with your own code. 

The primary and officially supported method for this is function calling, while emerging patterns like the Model Context Protocol (MCP) offer alternative, community-driven approaches.

Function Calling: The Core of Tool Use in Gemini

Function calling allows you to define custom functions in your application code and provide their descriptions to a Gemini model. 

The model can then intelligently decide to "call" these functions and will return a structured JSON object containing the function name and the arguments it believes are necessary to execute it. 

Your application code is then responsible for actually running the function and sending the result back to the model to continue the conversation.

Enabling "thinking" can improve function call performance by allowing the model to reason through a request before suggesting function calls.

However, the Gemini API is stateless, so this reasoning context is lost between turns, which can reduce the quality of function calls that span multiple request turns.

This mechanism enables a wide range of possibilities, from querying a database for the latest sales figures to controlling smart home devices or fetching real-time weather data. See Google's function calling documentation for details on tool definitions.

How Function Calling Works: A Step-by-Step Guide
  1. Define Your Tools: In your code (e.g., Python), you define the functions that you want the Gemini model to be able to use. Each function should have a clear purpose and well-defined parameters.

  2. Provide Tool Specifications to the Model: You create a JSON schema that describes your functions, including their names, a description of what they do, and the parameters they accept. This schema is passed to the Gemini model as part of your request.

  3. The Model's Turn: When you send a prompt to the model, it analyzes the user's request and your provided function specifications. If it determines that one of your functions can help fulfill the request, instead of generating a text response, it will output a FunctionCall object. This object contains the name of the function to call and the arguments to use.

  4. Execute the Function: Your application code receives this FunctionCall object. It's then your responsibility to execute the corresponding function in your code with the provided arguments.

  5. Return the Result: After your function has executed, you send the return value of the function back to the Gemini model. This allows the model to use the function's output to generate a final, informed response to the user.
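The five steps above can be sketched end to end. Because calling the live API requires a key, this sketch simulates the model's side with a hard-coded FunctionCall-style payload; the dispatch-and-respond logic is what your application code would actually run, and the tool name get_weather is purely illustrative.

```python
# A minimal sketch of steps 4-5: dispatching a model-issued function
# call to local code and packaging the result to send back.
# The `function_call` payload is hard-coded here to stand in for what
# the Gemini API would return.

def get_weather(city: str) -> dict:
    """A stand-in tool: a real app would query a weather API here."""
    return {"city": city, "forecast": "sunny", "temp_c": 24}

# Step 1: map tool names to the Python callables that implement them.
TOOL_REGISTRY = {"get_weather": get_weather}

# Step 3 (simulated): the shape of a FunctionCall the model would emit.
function_call = {"name": "get_weather", "args": {"city": "Paris"}}

# Step 4: execute the named function with the model-supplied arguments.
tool_fn = TOOL_REGISTRY[function_call["name"]]
result = tool_fn(**function_call["args"])

# Step 5: wrap the result as a function-response payload (name + result)
# to send back to the model in the next turn.
function_response = {"name": function_call["name"], "response": {"result": result}}
print(function_response)
```

Keeping a registry keyed by tool name keeps the dispatch step a single lookup, no matter how many functions you declare to the model.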

Python Example: A Meeting Scheduler Tool

Here’s a basic example of how to implement a meeting scheduler using the Gemini API in Python:

from google import genai
from google.genai import types

# Define the function declaration for the model
schedule_meeting_function = {
    "name": "schedule_meeting",
    "description": "Schedules a meeting with specified attendees at a given time and date.",
    "parameters": {
        "type": "object",
        "properties": {
            "attendees": {
                "type": "array",
                "items": {"type": "string"},
                "description": "List of people attending the meeting.",
            },
            "date": {
                "type": "string",
                "description": "Date of the meeting (e.g., '2024-07-29')",
            },
            "time": {
                "type": "string",
                "description": "Time of the meeting (e.g., '15:00')",
            },
            "topic": {
                "type": "string",
                "description": "The subject or topic of the meeting.",
            },
        },
        "required": ["attendees", "date", "time", "topic"],
    },
}

# Configure the client and tools
client = genai.Client()
tools = types.Tool(function_declarations=[schedule_meeting_function])
config = types.GenerateContentConfig(tools=[tools])

# Send request with function declarations
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Schedule a meeting with Bob and Alice for 03/14/2025 at 10:00 AM about the Q3 planning.",
    config=config,
)

# Check for a function call
if response.candidates[0].content.parts[0].function_call:
    function_call = response.candidates[0].content.parts[0].function_call
    print(f"Function to call: {function_call.name}")
    print(f"Arguments: {function_call.args}")
    #  In a real app, you would call your function here:
    #  result = schedule_meeting(**function_call.args)
else:
    print("No function call found in the response.")
    print(response.text)  

In a more complete application, you would inspect the response for a FunctionCall and then execute the corresponding Python function.
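Before executing, it is also worth validating the model-supplied arguments against your declared schema, since the model can occasionally omit a field. A minimal sketch of such a check, reusing the required list from the schedule_meeting schema above (the helper validate_args is hypothetical):

```python
# Hypothetical sketch: check model-supplied args against the declared
# JSON schema's "required" list before executing the tool.
schema = {
    "required": ["attendees", "date", "time", "topic"],
}

def validate_args(args: dict, schema: dict) -> list:
    """Return the names of required parameters that are missing."""
    return [name for name in schema["required"] if name not in args]

# Example: the model supplied everything except "topic".
args = {"attendees": ["Bob", "Alice"], "date": "2025-03-14", "time": "10:00"}
missing = validate_args(args, schema)
print(missing)
```

If anything is missing, you can return an error payload to the model instead of executing the tool, prompting it to ask the user for the missing detail.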

Model Context Protocol (MCP): An Emerging Standard

The Model Context Protocol (MCP) is an open-source specification designed to standardize communication between large language models and other components of an AI system, such as other models, tools, and data stores. Think of it as a set of rules for how these different parts can have a structured and context-aware conversation.

Key characteristics of MCP include:
  • Persistent Context: MCP aims to maintain a consistent context across multiple interactions, allowing a model to "remember" previous exchanges and tool uses.

  • Structured Communication: It defines a clear format for messages, ensuring that information is passed reliably between different parts of the system.

  • Inter-Agent Communication: MCP can facilitate communication not just between a model and its tools, but also between different AI agents, enabling more complex, collaborative workflows.
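MCP's structured communication is built on JSON-RPC 2.0. As an illustration of what that framing looks like, here is the general shape of a client request asking an MCP server to invoke a tool; the tool name and arguments are made up for the example:

```python
import json

# Illustrative MCP "tools/call" request framed as JSON-RPC 2.0.
# The tool name and arguments below are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_forecast",
        "arguments": {"city": "Paris"},
    },
}
print(json.dumps(request, indent=2))
```

The server replies with a JSON-RPC response carrying the same id, which is part of how MCP keeps multi-step exchanges structured and traceable.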

MCP and Gemini

MCP is not yet a native, built-in feature of the Gemini API in the way function calling is, and developers are eagerly awaiting the release of MCP-integrated models from Google. Function calling offers a way to interact with the outside world of apps and APIs, but it does not offer the contextual relevance and continuity provided by the MCP standard.

There are open-source projects and community-driven efforts to create clients and servers that implement MCP and can integrate with various LLMs, including Gemini. This approach offers a higher degree of flexibility and control for building sophisticated, multi-component AI applications, but the lack of official support can mean less stable results.

Key Differences 

Feature     | Function Calling (Official Gemini API)                                          | Model Context Protocol (MCP)
Integration | Native, officially supported feature of the Gemini API.                         | An open standard that requires a separate implementation (server/client).
Complexity  | Relatively straightforward to implement for basic tool use.                     | Can be more complex to set up but offers greater flexibility for complex systems.
Focus       | Primarily focused on the interaction between a single model and a set of tools. | Designed for broader communication between multiple models, tools, and agents.
Use Case    | Ideal for adding specific functionalities to a Gemini-powered application.      | Suited for building complex, multi-agent systems and applications requiring persistent context across various components.


For most developers looking to integrate external tools and data with their Gemini applications, the official function calling feature is the recommended and most direct approach. 

For those building more advanced, multi-agent systems, or who require the specific architectural benefits of a standardized communication protocol, exploring the Model Context Protocol as a complementary technology can be a powerful option, but it won't be fully realized until Gemini models natively support the protocol.


Joshua Berkowitz July 10, 2025