
How can I write an MCP server in Python that exposes a tool


The Model Context Protocol (MCP) is an emerging open standard, proposed by Anthropic, designed to provide a universal, standardized interface for connecting AI language models to external tools, data, and services. It acts like a "USB-C port" for AI, allowing seamless, secure, and reusable integrations between language models and various functionalities or datasets. An MCP system uses a client-server architecture, where the MCP server exposes functionality (tools) and data (resources) that an MCP client, often integrated within an AI application or environment, can access and invoke.

Writing an MCP server in Python involves creating a lightweight program that exposes specific tools or resources according to the MCP specification and communicates with clients via common transports such as stdio, HTTP, or WebSockets. Typically, MCP servers focus on domain-specific functionality, such as managing tasks (e.g., a to-do list), fetching external information, or running computations, providing AI models with structured, executable actions in conversations or workflows.

Below is an extensive overview and guide to writing an MCP server in Python that exposes a tool, covering key concepts, architecture, implementation steps, code examples, and testing methods.

***

Understanding MCP Server Architecture

The MCP system is composed of three key parts:

- Host: The main AI application hosting the language model (e.g., chat UI, programming IDE).
- MCP Client: A bridge within the host that maintains a persistent connection with the MCP server. It acts as a proxy for the AI model to call or query server functionality.
- MCP Server: The backend program that exposes domain-specific tools, resources, or prompts. It handles incoming requests, executes tools, and returns responses.

Communication typically happens over standardized transports such as:

- stdio: Using standard input/output streams, ideal for local or embedded processes.
- HTTP/WebSocket: For remote or networked connections allowing scalable distributed deployments.

The server exposes:

- Tools: Functions or actions the AI can call (e.g., add an item, fetch weather).
- Resources: Data or files the AI can request to load into context.
- Prompts: Templates directing AI behavior.

MCP servers allow language models to integrate with external systems securely and flexibly, supporting advanced interaction patterns such as streaming results, error handling, and signaling, which go beyond simple function calls.
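Under the hood, every exchange between client and server is a JSON-RPC 2.0 message; tool invocations use the `tools/call` method. As an illustration (the field names follow the MCP specification, but the `add` tool and its arguments here are hypothetical), a request and its reply look roughly like this, shown as Python dictionaries:

```python
import json

# A tool-call request as it travels over the wire. MCP frames every
# message as JSON-RPC 2.0; tool invocations use the "tools/call" method.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "add",                  # which tool to invoke
        "arguments": {"a": 2, "b": 3},  # keyword arguments for the tool
    },
}

# The server replies with the same id and a list of content parts.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "5"}]},
}

wire = json.dumps(request)  # what actually crosses the transport
```

An SDK hides this framing entirely; you only write the Python function being called.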

***

Setting Up Your MCP Server Environment in Python

To write an MCP server in Python, it is recommended to use an existing Python SDK or library that implements the MCP protocol details and tool registration, such as the `FastMCP` library (a modern choice) or the official MCP Python SDK, which itself bundles a `FastMCP` server class.

Installing Dependencies

You typically install the MCP Python SDK or a helper library, for example:

```bash
pip install fastmcp
```

or, for the official MCP Python SDK:

```bash
pip install mcp
```

These packages provide decorators, server classes, and transports for simplified MCP server development.

***

Creating a Basic MCP Server That Exposes a Tool

Here's a step-by-step approach to writing a Python MCP server exposing a simple tool that performs addition, serving as a calculator.

Server Initialization

```python
from fastmcp import FastMCP

# Create the MCP server instance
mcp = FastMCP("Calculator Server")
```

Defining a Tool

A tool is a Python function decorated with `@mcp.tool()` indicating it can be called by the model. Tools should have type hints and docstrings for automatic interface creation.

Example - an addition tool:

```python
@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b
```

Tools can also be asynchronous if needed.

Running the Server

After defining tools, start the MCP server on a transport:

```python
if __name__ == "__main__":
    mcp.run(transport="stdio")  # Use stdio transport for local communication
```

This minimal server can now communicate with an MCP client that calls the `add` tool with parameters `a` and `b`.

***

Extending the MCP Server with Multiple Tools

To make a more functional server, define multiple tools. For example, adding subtraction and multiplication:

```python
@mcp.tool()
def subtract(a: int, b: int) -> int:
    """Subtract b from a."""
    return a - b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b
```

This approach enables your MCP server to expose a suite of tools, accessible by the AI model to perform different calculations or actions. The tools' metadata, like parameter types and descriptions, enable the client (and model) to discover how to invoke them precisely.

***

Adding Resources to the MCP Server

Resources are data structures or files that the LLM can request to load as context (akin to GET requests in APIs).

You define resources with a decorator that binds them to a URI; with FastMCP, a URI template exposes a parameterized resource, for example:

```python
@mcp.resource("greeting://{name}")
def greeting(name: str) -> str:
    """Return a personalized greeting."""
    return f"Hello, {name}!"
```

Resources enrich the interaction capability by providing static or dynamic data alongside executable tools.

***

Handling Requests and Responses

The MCP framework abstracts most request parsing and response formatting. The server listens for JSON requests corresponding to tool calls or resource queries, executes the associated Python function, and returns JSON results.

Internally, the framework manages:

- Parameter validation using type hints.
- Streaming partial updates (if supported).
- Error handling with structured error responses.
- Logging and diagnostics.
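Real SDKs typically build argument validation on Pydantic models derived from the type hints; the simplified, hypothetical sketch below only illustrates the idea of checking incoming arguments before dispatching a call:

```python
import inspect
from typing import get_type_hints

def validate_arguments(fn, arguments: dict) -> dict:
    """Check incoming tool arguments against a function's type hints,
    roughly as an MCP framework does before dispatching a tool call."""
    hints = get_type_hints(fn)
    hints.pop("return", None)  # only parameters matter here
    for name in inspect.signature(fn).parameters:
        if name not in arguments:
            raise ValueError(f"Missing argument: {name}")
        expected = hints.get(name)
        if expected is not None and not isinstance(arguments[name], expected):
            raise TypeError(f"Argument {name!r} must be {expected.__name__}")
    return arguments

def add(a: int, b: int) -> int:
    return a + b

# Valid arguments pass through; the call then proceeds normally.
validated = validate_arguments(add, {"a": 2, "b": 3})
result = add(**validated)  # 5
```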

***

Building a Complete Example: To-Do List MCP Server

One popular example is a to-do list manager server with three tools:

- `list_items`: list all pending tasks.
- `new_item`: add a task.
- `complete_item`: mark a task done.

```python
from fastmcp import FastMCP

mcp = FastMCP("ToDo List Server")

todo_list = []

@mcp.tool()
def list_items():
    """List all pending todo items."""
    return todo_list

@mcp.tool()
def new_item(task: str):
    """Add a new task."""
    todo_list.append(task)
    return f"Added task: {task}"

@mcp.tool()
def complete_item(task: str):
    """Mark a task completed and remove it."""
    if task in todo_list:
        todo_list.remove(task)
        return f"Completed task: {task}"
    else:
        return "Task not found."

if __name__ == "__main__":
    mcp.run(transport="stdio")
```

With this server, an MCP client or AI model can manage tasks interactively. The example is simple but can be extended with persistence, error management, and richer tool capabilities.
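As a first step toward persistence, the in-memory list could be swapped for a small JSON file. A sketch (the `todo.json` path and helper names are illustrative, not part of any MCP API):

```python
import json
from pathlib import Path

TODO_FILE = Path("todo.json")  # illustrative storage location

def load_items() -> list[str]:
    """Load the task list from disk, defaulting to an empty list."""
    if TODO_FILE.exists():
        return json.loads(TODO_FILE.read_text())
    return []

def save_items(items: list[str]) -> None:
    """Write the task list back to disk after every mutation."""
    TODO_FILE.write_text(json.dumps(items))
```

Each tool would then call `load_items()` at the start and `save_items()` after any change, so tasks survive server restarts.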

***

Testing and Connecting Your MCP Server

You typically test an MCP server by running it locally and connecting it to an MCP-compatible client, such as:

- Claude Desktop (Anthropic's desktop app, which supports MCP servers).
- Developer tools such as the MCP Inspector, which simulate a client.
- Custom MCP client implementations (in Python or other languages).

Testing involves invoking your tools from the client and verifying correct responses and seamless conversation integration.

***

Considerations for Production MCP Servers

For practical deployments:

- Use transports like HTTP or WebSocket for reliable networking.
- Implement authentication and encryption for secure communication.
- Persist data in databases or files, instead of in-memory lists.
- Support asynchronous tool execution for long-running tasks.
- Provide comprehensive logging and monitoring.
- Gracefully handle errors and edge cases.

***

Summary

Writing an MCP server in Python involves:

1. Understanding MCP's client-server architecture and purpose as an extensibility standard for AI tools.
2. Installing an MCP Python SDK or library such as FastMCP.
3. Creating an MCP server instance.
4. Defining tools (exposed functions) using decorators that the language models can call.
5. Optionally, defining resources to expose data.
6. Running the server on a transport like stdio.
7. Testing and connecting your server to an MCP client or AI model.
8. Extending your server with richer functionality, persistence, and security for real-world applications.

This approach delivers a secure, standardized way for AI applications to call external tools dynamically, enabling powerful, context-aware AI assistants, plugins, and autonomous agents.
