A common pattern when people first discover agentic AI is to build one agent and give it every tool — search, write, code, review. It works, but barely. The same model that's great at web research is not automatically great at clean prose. Specialisation matters.
Multi-agent orchestration means breaking a complex task into sub-tasks, assigning each to a specialised agent, and wiring them together so their outputs flow into each other's inputs. LangGraph is the cleanest library I've found for expressing this as code.
A chain is a linear sequence: A → B → C. It's simple but brittle — you can't branch, retry, or loop back.
A graph lets you:

- branch on conditions (e.g. approve vs. revise)
- loop back for retries and revision cycles
- fan out to run nodes in parallel
LangGraph models your agent pipeline as a StateGraph: nodes are agent steps, edges are transitions between them, and shared state flows through the whole graph.
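Before touching the library, the mental model is small enough to sketch in plain Python: each node is a function from state to a dict of updates, and the framework merges those updates into shared state as it walks the edges. The snippet below is an illustrative simulation, not LangGraph API:

```python
# Toy simulation of the StateGraph mental model: nodes return partial
# state updates, and shared state accumulates along the edges.
def research(state: dict) -> dict:
    return {"research": f"facts about {state['topic']}"}

def write(state: dict) -> dict:
    return {"draft": f"post based on: {state['research']}"}

def run_linear(state: dict, nodes: list) -> dict:
    for node in nodes:                    # a fixed linear edge order
        state = {**state, **node(state)}  # merge the node's updates
    return state

final = run_linear({"topic": "RAG"}, [research, write])
print(final["draft"])  # → post based on: facts about RAG
```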
```bash
pip install langgraph langchain-anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
```

State is a `TypedDict` that every node can read and write. Design it to hold everything any node might need:
```python
from typing import TypedDict, Annotated
import operator

class BlogState(TypedDict):
    topic: str           # initial input
    research: str        # filled by Researcher
    draft: str           # filled by Writer
    feedback: str        # filled by Editor
    approved: bool       # controls the loop
    revision_count: Annotated[int, operator.add]  # accumulated across updates
```
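The `Annotated[int, operator.add]` field deserves a note: it attaches a reducer, so LangGraph combines each node's returned value with the existing one (here, by addition) instead of overwriting it. Returning `{"revision_count": 1}` therefore increments the counter. A rough, illustrative sketch of that merge rule (not LangGraph internals):

```python
import operator
from typing import Annotated, TypedDict, get_type_hints

class MiniState(TypedDict):
    draft: str                                    # plain field: last write wins
    revision_count: Annotated[int, operator.add]  # reduced field: values combine

def merge(state: dict, update: dict) -> dict:
    """Apply a node's update, honouring any reducer in the annotation."""
    hints = get_type_hints(MiniState, include_extras=True)
    merged = dict(state)
    for key, value in update.items():
        reducers = getattr(hints.get(key), "__metadata__", ())
        if reducers:
            merged[key] = reducers[0](merged[key], value)  # e.g. operator.add
        else:
            merged[key] = value
    return merged

state = {"draft": "v1", "revision_count": 0}
state = merge(state, {"draft": "v2", "revision_count": 1})
state = merge(state, {"revision_count": 1})
print(state)  # → {'draft': 'v2', 'revision_count': 2}
```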
Each node is an async function that receives the current state and returns a dict of updates:

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-7-sonnet-20250219", max_tokens=2048)

async def researcher(state: BlogState) -> dict:
    """Find key facts and talking points for the topic."""
    response = await llm.ainvoke([
        {
            "role": "user",
            "content": (
                f"You are a research assistant. Gather the 5 most important facts, "
                f"concepts, and practical examples about: {state['topic']}. "
                f"Be specific and include concrete details a developer would care about."
            ),
        }
    ])
    return {"research": response.content}
```
```python
async def writer(state: BlogState) -> dict:
    """Write a blog draft using the research."""
    response = await llm.ainvoke([
        {
            "role": "user",
            "content": (
                f"You are a technical blog writer. Write a 600-word blog post about "
                f"'{state['topic']}' using the following research:\n\n{state['research']}\n\n"
                f"{'Previous feedback to address: ' + state['feedback'] if state.get('feedback') else ''}"
                f"\n\nWrite in a clear, practical style for software developers."
            ),
        }
    ])
    # revision_count uses the operator.add reducer, so returning 1 increments it
    return {"draft": response.content, "revision_count": 1}
```
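The inline conditional inside the writer's f-string is the most fragile part of that prompt; pulling prompt assembly into a plain function makes it testable without an API call. `build_writer_prompt` is a hypothetical helper, shown as a refactoring sketch:

```python
def build_writer_prompt(topic: str, research: str, feedback: str = "") -> str:
    """Assemble the writer prompt, appending editor feedback only when present."""
    prompt = (
        f"You are a technical blog writer. Write a 600-word blog post about "
        f"'{topic}' using the following research:\n\n{research}\n\n"
    )
    if feedback:
        prompt += f"Previous feedback to address: {feedback}"
    prompt += "\n\nWrite in a clear, practical style for software developers."
    return prompt

first_pass = build_writer_prompt("RAG", "some facts")
revision = build_writer_prompt("RAG", "some facts", feedback="tighten the intro")
```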
```python
async def editor(state: BlogState) -> dict:
    """Review the draft and decide whether to approve or request revisions."""
    response = await llm.ainvoke([
        {
            "role": "user",
            "content": (
                f"You are a strict technical editor. Review this blog post draft and decide "
                f"if it's ready to publish.\n\nDraft:\n{state['draft']}\n\n"
                f"If approved, respond with exactly: APPROVED\n"
                f"Otherwise, respond with: REVISE: <specific feedback in 2-3 sentences>"
            ),
        }
    ])
    content = response.content
    if content.strip().startswith("APPROVED"):
        return {"approved": True, "feedback": ""}
    else:
        feedback = content.replace("REVISE:", "").strip()
        return {"approved": False, "feedback": feedback}
```

With the state defined and the nodes written, wire them into a graph:

```python
from langgraph.graph import StateGraph, END

def should_revise(state: BlogState) -> str:
    """Conditional edge: loop back to Writer or finish."""
    if state["approved"]:
        return "finish"
    if state.get("revision_count", 0) >= 2:
        # Safety valve: stop after 2 revisions regardless
        return "finish"
    return "revise"

# Build the graph
builder = StateGraph(BlogState)
builder.add_node("researcher", researcher)
builder.add_node("writer", writer)
builder.add_node("editor", editor)

# Linear flow: researcher → writer → editor
builder.set_entry_point("researcher")
builder.add_edge("researcher", "writer")
builder.add_edge("writer", "editor")

# Conditional loop: editor either approves (→ END) or sends back to writer
builder.add_conditional_edges(
    "editor",
    should_revise,
    {"finish": END, "revise": "writer"},
)

graph = builder.compile()
```
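Because `should_revise` is plain Python, the loop's termination behaviour can be sanity-checked without any model calls (the function is restated here so the snippet is self-contained):

```python
def should_revise(state: dict) -> str:
    """Conditional edge: loop back to Writer or finish."""
    if state["approved"]:
        return "finish"
    if state.get("revision_count", 0) >= 2:
        return "finish"  # safety valve: stop after 2 revisions regardless
    return "revise"

# An approved draft finishes; an unapproved one revises until the cap.
assert should_revise({"approved": True, "revision_count": 0}) == "finish"
assert should_revise({"approved": False, "revision_count": 1}) == "revise"
assert should_revise({"approved": False, "revision_count": 2}) == "finish"
```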
Now run the full pipeline:

```python
import asyncio

async def main():
    result = await graph.ainvoke({
        "topic": "How RAG improves AI agent accuracy",
        "research": "",
        "draft": "",
        "feedback": "",
        "approved": False,
        "revision_count": 0,
    })
    print("=== FINAL DRAFT ===")
    print(result["draft"])
    print(f"\nRevisions: {result['revision_count']}")

asyncio.run(main())
```

The graph runs researcher → writer → editor. If the editor requests changes, it loops back to the writer (up to two times), then finishes.
LangGraph can render your pipeline as a Mermaid diagram:

```python
from IPython.display import Image

Image(graph.get_graph().draw_mermaid_png())
```

Or print the Mermaid source to paste into any Mermaid renderer:

```python
print(graph.get_graph().draw_mermaid())
```

This is surprisingly useful for communicating pipelines to non-technical stakeholders.
The researcher–writer–editor pipeline is a template. Common extensions:

- **Parallel research:** `add_edge` from a split node to run multiple researcher agents simultaneously, then join their outputs before writing.
- **Human review:** a `human_review` node that pauses the graph and waits for external input using LangGraph's interrupt mechanism.
- **Mixed models:** `gpt-4o-mini` for cheap research and `claude-3-7-sonnet` only for the final writing step to reduce cost.

Tomorrow we focus on making agents production-ready: structured output, retry strategies, trace logging, and deployment options — everything you need to move from a working prototype to a reliable service.