Yesterday we defined what an agent is: a perceive → reason → act loop where the model decides the next step, not the developer.
Today we build one — a weather-checking agent that decides when to call a tool, reads the result, and reports back.
By the end of this post you'll have a working script in both TypeScript and Python, and you'll understand every object that flows between your code and the model.
A user asks: "What's the weather like in Tokyo and should I pack an umbrella?"
The agent must:

- recognize that it can't answer from its training data alone,
- call the get_weather tool to fetch live data,
- read the result and answer both parts of the question.

This is the smallest useful example of agentic behaviour — it has exactly one tool, one goal, and two turns. That simplicity makes every moving part visible.
We'll use the Open-Meteo API — it's free, requires no API key, and returns real forecast data.
You'll need Node.js 18 or newer (the TypeScript version uses the built-in fetch) or a recent Python 3, plus an Anthropic API key.

TypeScript setup:

mkdir weather-agent && cd weather-agent
npm init -y
npm install @anthropic-ai/sdk zod dotenv

Create a .env file:

ANTHROPIC_API_KEY=sk-ant-...

Python setup:

mkdir weather-agent && cd weather-agent
python -m venv .venv && source .venv/bin/activate
pip install anthropic python-dotenv

Create a .env file:

ANTHROPIC_API_KEY=sk-ant-...

Before we write any agent code, we need to describe the tools the agent can use. Tools are declared as JSON Schema objects — the same format used by OpenAI, Gemini, and most other providers.
// TypeScript — tool definition
const tools = [
  {
    name: "get_weather",
    description:
      "Returns current weather conditions for a given city, including temperature, precipitation probability, and a short condition summary. Call this whenever the user asks about weather in a specific location.",
    input_schema: {
      type: "object",
      properties: {
        city: {
          type: "string",
          description: "The city name to fetch weather for, e.g. 'Tokyo'",
        },
      },
      required: ["city"],
    },
  },
];

A few things to note:

- description is the most important field. The model reads it to decide whether and when to call the tool. Be precise — vague descriptions lead to wrong tool calls or missed tool calls.
- input_schema follows JSON Schema. The model generates a JSON object that matches this schema when it calls the tool.

The agent declares what tools exist; your code actually runs them. Here's the weather fetcher:
// TypeScript — tool implementation
async function get_weather(city: string): Promise<string> {
  // Geocode the city name to lat/lon using Open-Meteo's free geocoding API
  const geoRes = await fetch(
    `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(city)}&count=1`,
  );
  const geoData = await geoRes.json();
  if (!geoData.results?.length) {
    return JSON.stringify({ error: `Could not find location for "${city}"` });
  }
  const { latitude, longitude, name, country } = geoData.results[0];

  // Fetch current weather from Open-Meteo
  const weatherRes = await fetch(
    `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}` +
      `&current=temperature_2m,precipitation_probability,weathercode&timezone=auto`,
  );
  const weatherData = await weatherRes.json();
  const current = weatherData.current;

  // Map WMO weather code to a human-readable string
  const condition = wmoCodeToCondition(current.weathercode);

  return JSON.stringify({
    location: `${name}, ${country}`,
    temperature_c: current.temperature_2m,
    precipitation_probability_pct: current.precipitation_probability,
    condition,
  });
}

function wmoCodeToCondition(code: number): string {
  if (code === 0) return "Clear sky";
  if (code <= 3) return "Partly cloudy";
  if (code <= 48) return "Fog";
  if (code <= 67) return "Rain";
  if (code <= 77) return "Snow";
  if (code <= 82) return "Rain showers";
  if (code <= 99) return "Thunderstorm";
  return "Unknown";
}

The function returns a JSON string — not a parsed object. That's intentional: the result goes back to the model as a string in the tool_result message, and JSON is easy for the model to read.
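One optional hardening step: the model generates tool inputs to match input_schema, so you can sanity-check them before dispatch. Here's a minimal sketch (shown in Python for brevity; validate_input is a hypothetical helper covering only the tiny subset of JSON Schema this post uses, not an SDK feature):

```python
def validate_input(schema: dict, payload: dict) -> list[str]:
    """Return a list of problems; an empty list means the payload passes
    the required-fields and string-type checks our schema needs."""
    problems = []
    for field in schema.get("required", []):
        if field not in payload:
            problems.append(f"missing required field: {field}")
    for field, spec in schema.get("properties", {}).items():
        if field in payload and spec.get("type") == "string" \
                and not isinstance(payload[field], str):
            problems.append(f"{field} must be a string")
    return problems

# Same shape as the get_weather schema above
schema = {
    "type": "object",
    "properties": {"city": {"type": "string"}},
    "required": ["city"],
}
print(validate_input(schema, {"city": "Tokyo"}))  # → []
print(validate_input(schema, {}))                 # → ['missing required field: city']
```

Returning a problem list (rather than raising) lets you send the errors back to the model as a tool_result, giving it a chance to retry with corrected input.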
Here is the complete agent. Read it once, then we'll walk through each section.
// TypeScript — agent.ts
import Anthropic from "@anthropic-ai/sdk";
import * as dotenv from "dotenv";

dotenv.config();
const client = new Anthropic();

async function runAgent(userMessage: string) {
  const messages: Anthropic.MessageParam[] = [
    { role: "user", content: userMessage },
  ];
  console.log(`\nUser: ${userMessage}\n`);

  // ── Agent loop ─────────────────────────────────────────────────────────────
  while (true) {
    const response = await client.messages.create({
      model: "claude-opus-4-5",
      max_tokens: 1024,
      tools,
      messages,
    });
    console.log(`[stop_reason: ${response.stop_reason}]`);

    // ── Case 1: model is done ───────────────────────────────────────────────
    if (response.stop_reason === "end_turn") {
      const text = response.content
        .filter((b) => b.type === "text")
        .map((b) => (b as Anthropic.TextBlock).text)
        .join("");
      console.log(`\nAssistant: ${text}`);
      return text;
    }

    // ── Case 2: model wants to call a tool ──────────────────────────────────
    if (response.stop_reason === "tool_use") {
      // Add the assistant's message (including tool_use blocks) to history
      messages.push({ role: "assistant", content: response.content });

      // Execute every tool the model requested (there could be more than one)
      const toolResults: Anthropic.ToolResultBlockParam[] = [];
      for (const block of response.content) {
        if (block.type !== "tool_use") continue;
        console.log(
          `[tool call] ${block.name}(${JSON.stringify(block.input)})`,
        );
        let result: string;
        if (block.name === "get_weather") {
          result = await get_weather((block.input as { city: string }).city);
        } else {
          result = JSON.stringify({ error: `Unknown tool: ${block.name}` });
        }
        console.log(`[tool result] ${result}`);
        toolResults.push({
          type: "tool_result",
          tool_use_id: block.id,
          content: result,
        });
      }

      // Add tool results as a user message and loop again
      messages.push({ role: "user", content: toolResults });
    }
  }
}

// Run it
runAgent("What's the weather like in Tokyo? Should I pack an umbrella?");

Let's trace exactly what happens when you run this.
messages = [
  { role: "user", content: "What's the weather like in Tokyo? Should I pack an umbrella?" }
]

We call client.messages.create(). The model sees the user message and the tool definitions. It reasons: "I need live weather data for Tokyo. I have a get_weather tool. I'll call it."
Response:
{
  "stop_reason": "tool_use",
  "content": [
    {
      "type": "tool_use",
      "id": "toolu_01XyzAbc",
      "name": "get_weather",
      "input": { "city": "Tokyo" }
    }
  ]
}

The model stops and signals tool_use — it's waiting for us to run the tool.
Our code detects stop_reason === "tool_use", runs get_weather("Tokyo"), and gets back something like:
{
  "location": "Tokyo, Japan",
  "temperature_c": 18.4,
  "precipitation_probability_pct": 72,
  "condition": "Rain"
}

We append the assistant's message (the tool call) to history, then append the result as a user message containing a tool_result block. The conversation now looks like:
messages = [
  { role: "user", content: "What's the weather like in Tokyo?..." },
  { role: "assistant", content: [{ type: "tool_use", name: "get_weather", ... }] },
  { role: "user", content: [{ type: "tool_result", tool_use_id: "toolu_01XyzAbc", content: "{...}" }] },
]

We call client.messages.create() again with the updated history. The model now has the weather data and can answer the user's original question.
Response:
{
  "stop_reason": "end_turn",
  "content": [
    {
      "type": "text",
      "text": "It's currently 18°C and rainy in Tokyo, with a 72% chance of precipitation. I'd definitely recommend packing an umbrella — and a light jacket."
    }
  ]
}

stop_reason is end_turn, so the loop exits and we print the answer.
The structure is identical. Here it is for reference:
# Python — agent.py
import json
import urllib.parse
import urllib.request

from anthropic import Anthropic
from dotenv import load_dotenv

load_dotenv()
client = Anthropic()

tools = [
    {
        "name": "get_weather",
        "description": (
            "Returns current weather conditions for a given city, including temperature, "
            "precipitation probability, and a short condition summary. Call this whenever "
            "the user asks about weather in a specific location."
        ),
        "input_schema": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "The city name to fetch weather for, e.g. 'Tokyo'",
                }
            },
            "required": ["city"],
        },
    }
]

def wmo_code_to_condition(code: int) -> str:
    if code == 0: return "Clear sky"
    if code <= 3: return "Partly cloudy"
    if code <= 48: return "Fog"
    if code <= 67: return "Rain"
    if code <= 77: return "Snow"
    if code <= 82: return "Rain showers"
    if code <= 99: return "Thunderstorm"
    return "Unknown"

def get_weather(city: str) -> str:
    geo_url = (
        f"https://geocoding-api.open-meteo.com/v1/search"
        f"?name={urllib.parse.quote(city)}&count=1"
    )
    with urllib.request.urlopen(geo_url) as r:
        geo = json.loads(r.read())
    if not geo.get("results"):
        return json.dumps({"error": f'Could not find location for "{city}"'})
    result = geo["results"][0]
    lat, lon, name, country = result["latitude"], result["longitude"], result["name"], result["country"]

    weather_url = (
        f"https://api.open-meteo.com/v1/forecast"
        f"?latitude={lat}&longitude={lon}"
        f"&current=temperature_2m,precipitation_probability,weathercode&timezone=auto"
    )
    with urllib.request.urlopen(weather_url) as r:
        weather = json.loads(r.read())
    current = weather["current"]

    return json.dumps({
        "location": f"{name}, {country}",
        "temperature_c": current["temperature_2m"],
        "precipitation_probability_pct": current["precipitation_probability"],
        "condition": wmo_code_to_condition(current["weathercode"]),
    })

def run_agent(user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]
    print(f"\nUser: {user_message}\n")
    while True:
        response = client.messages.create(
            model="claude-opus-4-5",
            max_tokens=1024,
            tools=tools,
            messages=messages,
        )
        print(f"[stop_reason: {response.stop_reason}]")
        if response.stop_reason == "end_turn":
            text = "".join(b.text for b in response.content if b.type == "text")
            print(f"\nAssistant: {text}")
            return text
        if response.stop_reason == "tool_use":
            messages.append({"role": "assistant", "content": response.content})
            tool_results = []
            for block in response.content:
                if block.type != "tool_use":
                    continue
                print(f"[tool call] {block.name}({block.input})")
                if block.name == "get_weather":
                    result = get_weather(block.input["city"])
                else:
                    result = json.dumps({"error": f"Unknown tool: {block.name}"})
                print(f"[tool result] {result}")
                tool_results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": result,
                })
            messages.append({"role": "user", "content": tool_results})

if __name__ == "__main__":
    run_agent("What's the weather like in Tokyo? Should I pack an umbrella?")

The model decides; your code executes. You never write if user_asked_about_weather: call_weather_api(). The model reads the tool descriptions and decides when to call them. Your job is to dispatch the call and return the result.
Tool results go back as user messages. This is the part that trips people up. The tool_result block is sent inside a message with role "user" — there is no "tool" or "system" role for it. The conversation is always a strict alternating sequence of user and assistant turns.
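Because every tool result rides inside a user turn, the alternation is easy to check mechanically before each API call. A tiny sketch (assert_alternating is my own helper, not part of the SDK):

```python
def assert_alternating(messages: list[dict]) -> None:
    """Raise if two consecutive messages share a role. The agent loop in
    this post keeps a strict user/assistant alternation; this makes that
    invariant explicit before sending history back to the API."""
    for prev, cur in zip(messages, messages[1:]):
        if prev["role"] == cur["role"]:
            raise ValueError(f"two consecutive {prev['role']} messages")

history = [
    {"role": "user", "content": "What's the weather in Tokyo?"},
    {"role": "assistant", "content": [{"type": "tool_use", "name": "get_weather"}]},
    {"role": "user", "content": [{"type": "tool_result", "content": "{...}"}]},
]
assert_alternating(history)  # passes: user / assistant / user
```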
The loop is just a while True. There's no framework magic here. The agent stops when stop_reason is end_turn. You can add your own stopping conditions (max turns, budget limits, error thresholds) by checking them at the top of the loop.
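One way to sketch such a stopping condition is to factor the loop body into a step callable and cap the number of turns (run_loop and step are illustrative names of mine, not SDK API; step stands in for one client.messages.create call plus tool dispatch):

```python
def run_loop(step, max_turns: int = 10) -> str:
    """Drive an agent loop with a hard turn budget. `step` performs one
    model call (and any tool dispatch) and returns (stop_reason, text)."""
    for _ in range(max_turns):
        stop_reason, text = step()
        if stop_reason == "end_turn":
            return text
    raise RuntimeError(f"agent did not finish within {max_turns} turns")

# Stub that "calls a tool" twice before answering:
replies = iter([("tool_use", ""), ("tool_use", ""), ("end_turn", "Pack an umbrella.")])
print(run_loop(lambda: next(replies)))  # → Pack an umbrella.
```

The same shape accommodates budget limits or error thresholds: track them alongside the turn counter and raise (or return a partial answer) when they're exceeded.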
Tomorrow we'll build the same agent using the OpenAI Agents SDK and compare the two APIs side by side: same task, different idioms, so you can choose the one that fits your stack.
Day 4 will close the Days 2–4 block with a direct comparison post: architecture diagrams, API surface differences, and a recommendation matrix for choosing between them.