Model Context Protocol (MCP): Standardising Agent Tool Access

Every AI framework invented its own way to expose tools to models — until MCP. Anthropic's Model Context Protocol is rapidly becoming the USB-C of agent tool connectivity. This post builds a minimal MCP server in TypeScript and shows you how to connect it to Claude Desktop or GitHub Copilot.

5 min read

Until recently, every AI framework solved the "how does a model call an external tool?" problem differently. LangChain had its tool format. The OpenAI Agents SDK had another. Claude had its own. If you wanted your file-system browser to work with three different agents, you wrote three different integrations.

Anthropic's Model Context Protocol (MCP) — released in late 2024 and now adopted by OpenAI, Google DeepMind, and dozens of tool vendors — is a single open standard that any model host and any tool server can speak. Write a tool once; connect it to every compatible agent.

The Architecture: Host · Client · Server

MCP has three roles:

| Role | What it does | Examples |
| --- | --- | --- |
| Host | The AI application that the user interacts with | Claude Desktop, GitHub Copilot, VS Code |
| Client | Lives inside the host; manages the connection to one MCP server | Built into the host; you don't write this |
| Server | Exposes tools, resources, and prompts over MCP | Your custom server, or an open-source one |

The host and client are typically the same product (e.g. Claude Desktop). You only need to build the server.

Communication between client and server happens over stdio (for local servers) or HTTP with Server-Sent Events (for remote servers). The protocol is JSON-RPC 2.0 under the hood, but the SDK hides all of that.
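Under either transport, a tool invocation is just a JSON-RPC 2.0 request. A minimal sketch of the message the client sends (the field names follow the MCP `tools/call` method; the `id` value is arbitrary):

```typescript
// What the client writes to the server when the model invokes a tool:
// a JSON-RPC 2.0 request, serialised as one JSON object per message.
const toolCall = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_current_time",
    arguments: { timezone: "Asia/Tokyo" },
  },
};

// Over stdio this is newline-delimited JSON on the server's stdin.
console.log(JSON.stringify(toolCall));
```

You never construct these by hand; the SDK serialises, routes, and validates them for you.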

Install the SDK

npm install @modelcontextprotocol/sdk

# TypeScript support
npm install -D typescript @types/node ts-node

Building a Minimal MCP Server

This server exposes one tool: get_current_time, which returns the current time formatted for a given timezone (defaulting to UTC). Simple, but enough to see the full MCP lifecycle.

// server.ts
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
 
const server = new Server(
  { name: "time-server", version: "1.0.0" },
  { capabilities: { tools: {} } },
);
 
// Declare what tools are available
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "get_current_time",
      description:
        "Get the current UTC time, optionally formatted for a timezone.",
      inputSchema: {
        type: "object",
        properties: {
          timezone: {
            type: "string",
            description:
              "IANA timezone string, e.g. 'Europe/London'. Defaults to UTC.",
          },
        },
        required: [],
      },
    },
  ],
}));
 
// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name !== "get_current_time") {
    throw new Error(`Unknown tool: ${request.params.name}`);
  }
 
  const tz = (request.params.arguments?.timezone as string) ?? "UTC";
  const now = new Date().toLocaleString("en-GB", {
    timeZone: tz,
    dateStyle: "full",
    timeStyle: "long",
  });
 
  return {
    content: [{ type: "text", text: `Current time in ${tz}: ${now}` }],
  };
});
 
// Start listening on stdio
const transport = new StdioServerTransport();
await server.connect(transport);

That's the entire server — about 50 lines. The SDK handles the JSON-RPC handshake, capability negotiation, and schema validation.
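One caveat worth hardening before you ship this: toLocaleString throws a RangeError when the timeZone option is not a valid IANA identifier, so an unguarded handler would surface a raw exception to the host. A sketch of a safer version (formatTime is a hypothetical helper, not part of the SDK):

```typescript
// Hypothetical helper: format the current time for a timezone, returning
// an error message instead of throwing on bad input. toLocaleString
// raises a RangeError for invalid IANA timezone identifiers.
function formatTime(tz: string): string {
  try {
    return new Date().toLocaleString("en-GB", {
      timeZone: tz,
      dateStyle: "full",
      timeStyle: "long",
    });
  } catch {
    return `Unknown timezone: ${tz}`;
  }
}
```

Returning the error as tool output, rather than throwing, also lets the model read the message and retry with a corrected timezone.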

Connecting to Claude Desktop

  1. Build your server: npx tsc server.ts --outDir dist --module nodenext --target es2022 (the top-level await requires an ES module target, and your package.json needs "type": "module")
  2. Open Claude Desktop → Settings → Developer → Edit Config
  3. Add your server to claude_desktop_config.json:
{
  "mcpServers": {
    "time-server": {
      "command": "node",
      "args": ["/absolute/path/to/dist/server.js"]
    }
  }
}
  4. Restart Claude Desktop.

You'll see a hammer icon in the chat input. Ask Claude "What time is it in Tokyo?" and it will call your get_current_time tool automatically.

Exposing Resources and Prompts

MCP supports three capability types, not just tools:

Tools — functions the model can call (what we built above).

Resources — data sources the model can read, like files or database rows. Useful for giving the model access to documents without stuffing them into the system prompt:

// ListResourcesRequestSchema comes from "@modelcontextprotocol/sdk/types.js"
server.setRequestHandler(ListResourcesRequestSchema, async () => ({
  resources: [
    {
      uri: "file:///project/README.md",
      name: "Project README",
      mimeType: "text/markdown",
    },
  ],
}));
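Listing a resource only advertises it; the host fetches the content with a separate resources/read request. A self-contained sketch of the response shape (the in-memory content is a stand-in; a real handler, registered with ReadResourceRequestSchema, would read the file from disk):

```typescript
// Illustrative resources/read response: map the advertised URI to its
// content. In a real server this logic lives inside
// server.setRequestHandler(ReadResourceRequestSchema, ...).
const README = "# Project\n\nHello from the README."; // stand-in content

function readResource(uri: string) {
  if (uri !== "file:///project/README.md") {
    throw new Error(`Unknown resource: ${uri}`);
  }
  return {
    contents: [{ uri, mimeType: "text/markdown", text: README }],
  };
}
```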

Prompts — reusable, parameterised prompt templates the host can surface to users as slash commands or shortcuts.
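A sketch of the two shapes involved (the prompt name and arguments here are invented for illustration; a real server returns these from handlers registered with ListPromptsRequestSchema and GetPromptRequestSchema):

```typescript
// prompts/list response: advertise one template with one argument.
// "summarise_file" is an invented example, not a real server's prompt.
const listPromptsResult = {
  prompts: [
    {
      name: "summarise_file",
      description: "Summarise a file in three bullet points",
      arguments: [
        { name: "path", description: "File to summarise", required: true },
      ],
    },
  ],
};

// prompts/get expands the template with the user's arguments into
// ready-to-send chat messages.
function getPrompt(path: string) {
  return {
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: `Summarise ${path} in three bullet points.`,
        },
      },
    ],
  };
}
```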

Open-Source MCP Servers to Use Today

You don't need to write every integration from scratch. The MCP ecosystem already has production-ready servers for:

| Server | What it exposes |
| --- | --- |
| @modelcontextprotocol/server-filesystem | Read/write local files |
| @modelcontextprotocol/server-github | Repos, PRs, issues, code search |
| @modelcontextprotocol/server-postgres | Query a Postgres database |
| @modelcontextprotocol/server-brave-search | Web search via Brave API |
| @modelcontextprotocol/server-memory | Persistent key-value knowledge graph |

Install any of these the same way you'd connect your custom server — add an entry to claude_desktop_config.json and restart.
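For example, the filesystem server can be run straight from npm via npx; a config entry along these lines works, with the directory path as a placeholder for whichever folder you want to expose:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/absolute/path/to/allowed/folder"
      ]
    }
  }
}
```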

Why This Matters for Agentic Development

Before MCP, giving an agent access to your internal tools meant writing a bespoke integration for every model and every framework. With MCP, you write the server once and any compliant host — Claude Desktop, GitHub Copilot, your own custom runner — can use it immediately. As the ecosystem grows, the investment compounds.

What's Next

Tomorrow we look at the OpenAI Assistants API: a hosted alternative to building your own tool loop, with built-in file storage, Code Interpreter, and retrieval — all managed by OpenAI's servers.