Until recently, every AI framework solved the "how does a model call an external tool?" problem differently. LangChain had its tool format. The OpenAI Agents SDK had another. Claude had its own. If you wanted your file-system browser to work with three different agents, you wrote three different integrations.
Anthropic's Model Context Protocol (MCP) — released in late 2024 and now adopted by OpenAI, Google DeepMind, and dozens of tool vendors — is a single open standard that any model host and any tool server can speak. Write a tool once; connect it to every compatible agent.
MCP has three roles:
| Role | What it does | Examples |
|---|---|---|
| Host | The AI application that the user interacts with | Claude Desktop, GitHub Copilot, VS Code |
| Client | Lives inside the host; manages the connection to one MCP server | Built into the host — you don't write this |
| Server | Exposes tools, resources, and prompts over MCP | Your custom server, or an open-source one |
The host and client are typically the same product (e.g. Claude Desktop). You only need to build the server.
Communication between client and server happens over stdio (for local servers) or HTTP with Server-Sent Events (for remote servers). The protocol is JSON-RPC 2.0 under the hood, but the SDK hides all of that.
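For a sense of what the SDK is hiding, a `tools/call` exchange on the wire looks roughly like this (message shape per the MCP specification; the tool name and text payload here are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_current_time",
    "arguments": { "timezone": "Asia/Tokyo" }
  }
}
```

The server replies with a `result` object carrying a `content` array — the same shape our handler returns below.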
Install the SDK, plus TypeScript tooling:

```bash
npm install @modelcontextprotocol/sdk

# TypeScript support
npm install -D typescript @types/node ts-node
```

This server exposes one tool: `get_current_time`, which returns the current time in a given timezone (defaulting to UTC). Simple, but enough to see the full MCP lifecycle.
```typescript
// server.ts
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "time-server", version: "1.0.0" },
  { capabilities: { tools: {} } },
);

// Declare what tools are available
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "get_current_time",
      description:
        "Get the current time, formatted for an optional timezone (defaults to UTC).",
      inputSchema: {
        type: "object",
        properties: {
          timezone: {
            type: "string",
            description:
              "IANA timezone string, e.g. 'Europe/London'. Defaults to UTC.",
          },
        },
        required: [],
      },
    },
  ],
}));

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name !== "get_current_time") {
    throw new Error(`Unknown tool: ${request.params.name}`);
  }

  const tz = (request.params.arguments?.timezone as string) ?? "UTC";
  const now = new Date().toLocaleString("en-GB", {
    timeZone: tz,
    dateStyle: "full",
    timeStyle: "long",
  });

  return {
    content: [{ type: "text", text: `Current time in ${tz}: ${now}` }],
  };
});

// Start listening on stdio
const transport = new StdioServerTransport();
await server.connect(transport);
```

That's the entire server — about 50 lines. The SDK handles the JSON-RPC handshake, capability negotiation, and schema validation.
Compile the server. Because it uses top-level `await`, the output must be an ES module — set `"type": "module"` in `package.json` and pass module settings to match:

```bash
npx tsc server.ts --outDir dist --module nodenext --target es2022
```

Then point Claude Desktop at the compiled file in `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "time-server": {
      "command": "node",
      "args": ["/absolute/path/to/dist/server.js"]
    }
  }
}
```

After restarting Claude Desktop, you'll see a hammer icon in the chat input. Ask Claude "What time is it in Tokyo?" and it will call your `get_current_time` tool automatically.
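You can also exercise the server without Claude Desktop by writing a tiny MCP client. A minimal sketch, assuming the SDK's `Client` and `StdioClientTransport` classes and the `dist/server.js` path from above:

```typescript
// client.ts — spawn the server over stdio and call its tool directly
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "node",
  args: ["dist/server.js"],
});

const client = new Client({ name: "test-client", version: "1.0.0" });
await client.connect(transport); // performs the JSON-RPC handshake

const result = await client.callTool({
  name: "get_current_time",
  arguments: { timezone: "Asia/Tokyo" },
});
console.log(result.content); // the text content returned by the server

await client.close();
```

This is handy for integration tests: the client drives the same lifecycle (handshake, `tools/list`, `tools/call`) that a real host would.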
MCP supports three capability types, not just tools:
Tools — functions the model can call (what we built above).
Resources — data sources the model can read, like files or database rows. Useful for giving the model access to documents without stuffing them into the system prompt:
```typescript
import { ListResourcesRequestSchema } from "@modelcontextprotocol/sdk/types.js";

// Requires declaring resources: {} in the server's capabilities
server.setRequestHandler(ListResourcesRequestSchema, async () => ({
  resources: [
    {
      uri: "file:///project/README.md",
      name: "Project README",
      mimeType: "text/markdown",
    },
  ],
}));
```

Prompts — reusable, parameterised prompt templates the host can surface to users as slash commands or shortcuts.
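Prompts follow the same handler pattern as tools and resources. A minimal sketch, assuming the SDK's `ListPromptsRequestSchema`/`GetPromptRequestSchema` types and a hypothetical `summarise_file` prompt:

```typescript
import {
  ListPromptsRequestSchema,
  GetPromptRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

// Advertise the available prompts (requires prompts: {} in capabilities)
server.setRequestHandler(ListPromptsRequestSchema, async () => ({
  prompts: [
    {
      name: "summarise_file",
      description: "Summarise a file in three bullet points",
      arguments: [
        { name: "path", description: "File to summarise", required: true },
      ],
    },
  ],
}));

// Expand a prompt into concrete messages when the user invokes it
server.setRequestHandler(GetPromptRequestSchema, async (request) => ({
  messages: [
    {
      role: "user",
      content: {
        type: "text",
        text: `Summarise ${request.params.arguments?.path} in three bullet points.`,
      },
    },
  ],
}));
```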
You don't need to write every integration from scratch. The MCP ecosystem already has production-ready servers for:
| Server | What it exposes |
|---|---|
| `@modelcontextprotocol/server-filesystem` | Read/write local files |
| `@modelcontextprotocol/server-github` | Repos, PRs, issues, code search |
| `@modelcontextprotocol/server-postgres` | Query a Postgres database |
| `@modelcontextprotocol/server-brave-search` | Web search via Brave API |
| `@modelcontextprotocol/server-memory` | Persistent key-value knowledge graph |
Install any of these the same way you'd connect your custom server — add an entry to claude_desktop_config.json and restart.
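For example, the filesystem server can be launched with `npx` — a sketch of the config entry, where the final argument is whichever directory you want to expose:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/allowed/dir"
      ]
    }
  }
}
```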
Before MCP, giving an agent access to your internal tools meant writing a bespoke integration for every model and every framework. With MCP, you write the server once and any compliant host — Claude Desktop, GitHub Copilot, your own custom runner — can use it immediately. As the ecosystem grows, the investment compounds.
Tomorrow we look at the OpenAI Assistants API: a hosted alternative to building your own tool loop, with built-in file storage, Code Interpreter, and retrieval — all managed by OpenAI's servers.