Sreekutty
DevOps

Model Context Protocol (MCP) is an open protocol that standardises how AI applications provide context to large language models. Before MCP existed, every AI integration was custom-built. If you wanted Claude to read your Notion pages, you had to write a bespoke connector. If you then wanted it to also query your PostgreSQL database, you wrote another one from scratch, with a completely different interface. MCP replaces this fragmentation with a single, consistent communication standard. Once a tool or data source exposes an MCP server, any MCP-compatible AI host can connect to it immediately — no custom integration work required. The analogy Anthropic often uses is that MCP is to AI tools what HTTP is to the web. Just as HTTP standardised how browsers and servers communicate, MCP standardises how AI models and external systems communicate.
MCP has three main moving parts: hosts, clients, and servers. Understanding the relationship between these three is the key to understanding the entire protocol.
Hosts
A host is the AI application that a user actually interacts with — Claude Desktop, Cursor, VS Code with a Copilot extension, or your own custom AI application. The host is responsible for managing the overall AI session, holding the LLM conversation, and orchestrating one or more MCP clients.
Clients
An MCP client lives inside the host. It is a lightweight component whose sole job is to maintain a one-to-one connection with a single MCP server. One host can run multiple clients simultaneously, which means it can connect to multiple servers at the same time.
Servers
An MCP server is a lightweight program that wraps an external tool or data source and exposes it to the AI through the MCP protocol. You can have an MCP server for GitHub, one for your local file system, one for Slack, one for a PostgreSQL database — and the AI host can talk to all of them through its pool of clients. Each MCP server exposes up to three categories of capabilities:

Tools are executable functions the AI can call. Reading a file, creating a GitHub issue, sending an API request — these are all tools. The AI decides when and how to invoke them based on the task at hand.

Resources are read-only data sources that can be loaded into the AI's context window. Think of a resource as a file, a database record, or a live document that the AI can read but not modify directly.

Prompts are pre-built prompt templates that a server can offer. They are reusable, parameterised instructions that users or applications can invoke by name, ensuring consistency across interactions.
MCP uses a client-server architecture built on top of JSON-RPC 2.0 for message transport. JSON-RPC is a lightweight remote procedure call protocol that encodes requests and responses as simple JSON objects. It was chosen for MCP because it is language-agnostic, easy to implement, and well-understood across the developer community.
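To make that concrete, here is what a JSON-RPC 2.0 round trip looks like in plain Python. The tools/list method name is one MCP actually defines; the id value and the empty result are illustrative.

```python
import json

# A JSON-RPC 2.0 request is just a JSON object with a method name,
# optional params, and an id used to correlate the eventual response.
request = {"jsonrpc": "2.0", "id": 7, "method": "tools/list", "params": {}}
wire = json.dumps(request)  # this string is what travels over the transport

# The receiver decodes the message and answers with the same id.
decoded = json.loads(wire)
response = {"jsonrpc": "2.0", "id": decoded["id"], "result": {"tools": []}}
```

Because correlation happens purely through the id field, a client can have several requests in flight at once and still match each response to the call that produced it.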
Communication between a client and server can happen over two transports:
Standard input/output (stdio) is used for local servers that run as a subprocess on the same machine as the host. The client spawns the server process and communicates with it through stdin and stdout. This is the most common pattern for local tools like a file-system server or a locally running database.
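The stdio pattern can be sketched in pure Python with a stand-in echo "server" spawned as a subprocess. The child process and its one-message protocol here are illustrative stand-ins, not the real MCP message set, but the framing idea — newline-delimited JSON over stdin/stdout — is the same.

```python
import json
import subprocess
import sys

# Hypothetical one-trick "server": reads one JSON-RPC request per line on
# stdin and writes a response to stdout.
SERVER = '''
import json, sys
for line in sys.stdin:
    req = json.loads(line)
    resp = {"jsonrpc": "2.0", "id": req["id"], "result": {"echo": req["params"]}}
    print(json.dumps(resp), flush=True)
'''

# The client spawns the server process, exactly as an MCP host does locally.
proc = subprocess.Popen([sys.executable, "-c", SERVER],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

request = {"jsonrpc": "2.0", "id": 1, "method": "ping", "params": {"msg": "hi"}}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()

response = json.loads(proc.stdout.readline())
proc.stdin.close()
proc.wait()
print(response)
```

Killing the subprocess tears down the connection — there is no socket, no port, and no network exposure, which is why stdio is the default for local tools.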
HTTP with Server-Sent Events (SSE) is used for remote servers deployed on the internet. The client sends requests over HTTP, and the server can stream responses back using SSE — which is particularly useful for long-running operations.
The lifecycle of a single MCP interaction looks like this. When the host application starts, each client connects to its assigned server and performs an initialisation handshake — exchanging protocol versions and capability declarations. The server tells the client what tools, resources, and prompts it exposes. Later, when the LLM decides it needs to use a tool (for example, to read a file), the host sends a tools/call request through the appropriate client. The server executes the operation and returns the result as a structured JSON response. The host then injects that result back into the LLM's context so it can continue the conversation.
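The handshake at the start of that lifecycle is itself a JSON-RPC call. A sketch of the client's initialize request — the client name and version are illustrative; the protocolVersion string shown is the 2024-11-05 spec revision:

{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": {
      "name": "example-host",
      "version": "0.1.0"
    }
  }
}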
Here is a minimal example of what a tools/call request looks like over the wire:
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "read_file",
    "arguments": {
      "path": "/home/user/project/main.py"
    }
  }
}
And the server's response:
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "def main():\n    print('Hello, world!')\n"
      }
    ]
  }
}
The model never touches the file system directly. The MCP server is the intermediary — it receives a structured request, performs the operation, and hands the result back to the model in a format it understands.
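Putting the two messages together, a toy dispatcher shows the server's role. Everything here — handle, TOOLS, and the hard-coded file text — is an illustrative stand-in, not SDK code.

```python
import json

def read_file_tool(path: str) -> str:
    # Stand-in for a real file read, so the example is self-contained.
    return "def main():\n    print('Hello, world!')\n"

# The server's registry of callable tools, keyed by name.
TOOLS = {"read_file": read_file_tool}

def handle(message: str) -> str:
    """Dispatch one tools/call request the way an MCP server would."""
    req = json.loads(message)
    if req.get("jsonrpc") != "2.0" or req["method"] != "tools/call":
        raise ValueError("unsupported message")
    tool = TOOLS[req["params"]["name"]]
    text = tool(**req["params"]["arguments"])
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"content": [{"type": "text", "text": text}]},
    })

incoming = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "read_file",
               "arguments": {"path": "/home/user/project/main.py"}},
})
reply = json.loads(handle(incoming))
```

The model only ever sees the structured reply; whether the tool read a file, queried a database, or called an API is invisible to it.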
Step 1: Install the SDK
Install the official MCP Python SDK:

pip install mcp
Step 2: Create the Server File
Create a file named file_reader_server.py and paste in the following:
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("file-reader")

@mcp.tool()
def read_file(path: str) -> str:
    """Read the contents of a text file at the given path."""
    try:
        with open(path, "r") as f:
            return f.read()
    except FileNotFoundError:
        return f"Error: File not found at {path}"
    except Exception as e:
        return f"Error reading file: {str(e)}"

if __name__ == "__main__":
    mcp.run()
That is the entire server. The @mcp.tool() decorator automatically registers the function as an MCP tool, generates its JSON schema from the Python type hints, and makes it discoverable to any connecting client.
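As a rough sketch of how a decorator can derive a JSON schema from type hints — the real SDK's output is richer, and tool_schema and PY_TO_JSON are made-up names for illustration:

```python
import inspect
from typing import get_type_hints

# Illustrative mapping from Python annotations to JSON Schema type names.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn):
    """Derive a tool description from a function's hints and docstring."""
    hints = get_type_hints(fn)
    hints.pop("return", None)
    params = inspect.signature(fn).parameters
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "inputSchema": {
            "type": "object",
            "properties": {n: {"type": PY_TO_JSON[hints[n]]} for n in params},
            "required": [n for n, p in params.items() if p.default is p.empty],
        },
    }

def read_file(path: str) -> str:
    """Read the contents of a text file at the given path."""
    ...

print(tool_schema(read_file))
```

This is why the type hints and docstring in the server above matter: they are not just documentation, they become the machine-readable contract the client discovers at handshake time.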
Step 3: Register the Server with Claude Desktop
Open Claude Desktop's configuration file. On macOS, it lives at:
~/Library/Application Support/Claude/claude_desktop_config.json
Add your server to the mcpServers section:
{
  "mcpServers": {
    "file-reader": {
      "command": "python",
      "args": ["/absolute/path/to/file_reader_server.py"]
    }
  }
}
Restart Claude Desktop. In the bottom-left of the chat interface you will see a tools icon indicating the server is connected. You can now ask Claude to read any file on your machine and it will use your tool to do it.
One of the most powerful aspects of MCP is that you do not always have to build your own servers. Anthropic and the open-source community have already built a rich library of pre-made servers that you can drop into any MCP-compatible host. Some of the most popular ones include servers for GitHub (create issues, read pull requests, browse repositories), PostgreSQL and SQLite (run queries, inspect schemas), the local file system (read, write, and navigate files), Brave Search (perform live web searches), Slack (read messages, post to channels), and Google Drive (browse and read documents).

Beyond pre-built servers, several major developer tools have already embedded native MCP support. Cursor, the AI-powered code editor, uses MCP to give its assistant access to codebases and terminals. Zed, Sourcegraph, and a growing list of enterprise tools have announced or shipped MCP integrations. The momentum behind the standard has grown rapidly since its release, and it is quickly becoming the default interoperability layer for agentic AI systems.
Before MCP, building an AI agent that could interact with multiple external systems meant writing and maintaining a custom integration for each one. As the number of tools grew, so did the maintenance burden. Every new model you wanted to support required re-implementing the same integrations. Every new tool required updating every agent that needed to use it.

MCP decouples this relationship entirely. Server developers write an MCP server once, and it works with every MCP host. Host developers build an MCP client once, and it works with every MCP server. The combinatorial explosion of N tools × M models collapses into N + M implementations.

For developers building agentic systems, this is a fundamental shift. It means you can compose powerful, multi-tool agents by connecting existing MCP servers rather than writing integration glue from scratch. It means your agents remain portable across AI models. And it means the community's work is additive rather than duplicated — every new MCP server built by anyone is immediately usable by everyone.
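The arithmetic behind that collapse, with made-up counts:

```python
tools, models = 10, 5                  # illustrative numbers, not from the article
bespoke_connectors = tools * models    # one custom integration per (tool, model) pair
mcp_implementations = tools + models   # one server per tool, one client per model
print(bespoke_connectors, mcp_implementations)
```

At ten tools and five models, that is 50 bespoke connectors versus 15 MCP implementations — and the gap widens as either count grows.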
© 2026 CloudHouse Technologies Pvt.Ltd. All rights reserved.