
AI Agent Builder

Ship AI agents faster with production-ready patterns. Each pattern includes full source code, live previews, and integration guides for the Vercel AI SDK. From simple routing agents to complex orchestration systems — build, customize, and deploy.

Featured Patterns

7 curated
Agent Routing Pattern

The routing pattern is the front door of any multi-agent system. Instead of sending every user message to a single monolithic prompt, it classifies the input first and dispatches it to a specialized sub-agent — each with its own system prompt, model, and toolset.

At the core is a classification step powered by generateObject. A Zod schema defines the possible intent categories (like "technical", "billing", or "sales"), and a fast, inexpensive model makes the routing decision. The downstream agent then handles the actual response with a more capable model if needed.

The key architectural insight is separation of classification from generation. The classifier runs something like GPT-4o-mini to keep latency under 200ms, while the responding agent can use a heavier model for quality. This keeps costs manageable at scale — you only pay for expensive inference on the messages that need it.

This pattern also includes load balancing across providers and graceful fallback handling. If the primary model is unavailable, the router can redirect to an alternative without the user noticing. Use this when your application serves multiple distinct user intents that benefit from specialized prompts or different model configurations.
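The classify-then-dispatch flow described above can be sketched as follows. This is a minimal illustration, not the pattern's full source: the model names, intent prompts, and sub-agent configuration are assumptions, and the load balancing and fallback handling are omitted.

```typescript
import { generateObject, generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// The possible intent categories, as a Zod schema for generateObject.
const intentSchema = z.object({
  intent: z.enum(["technical", "billing", "sales"]),
});

// One specialized sub-agent per intent (prompts are illustrative).
const subAgents = {
  technical: { model: openai("gpt-4o"), system: "You are a senior support engineer." },
  billing: { model: openai("gpt-4o-mini"), system: "You are a billing specialist." },
  sales: { model: openai("gpt-4o-mini"), system: "You are a sales assistant." },
} as const;

export async function route(userMessage: string) {
  // 1. Fast, inexpensive classification step.
  const { object } = await generateObject({
    model: openai("gpt-4o-mini"),
    schema: intentSchema,
    prompt: `Classify this support message: ${userMessage}`,
  });

  // 2. Dispatch to the specialized sub-agent for the actual response.
  const agent = subAgents[object.intent];
  const { text } = await generateText({
    model: agent.model,
    system: agent.system,
    prompt: userMessage,
  });
  return { intent: object.intent, text };
}
```

Note how only the second call ever touches the heavier model, which is where the cost saving at scale comes from.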

APIs: generateObject, streamText, convertToModelMessages, new Agent, tool(), stepCountIs
Services: openai, perplexity, deepseek
Tags: ai, agents, routing, ai-sdk
Sub-Agent Orchestrator

The sub-agent orchestrator demonstrates how to build a custom Agent class that routes queries to specialized child agents. Rather than one agent trying to handle everything, the orchestrator delegates to focused sub-agents — research, analysis, and support — each with their own capabilities.

What makes this pattern powerful is the options passing mechanism. Each sub-agent receives typed configuration that controls its behavior, and the orchestrator coordinates their outputs into a structured response using Output.object. This gives you type-safe orchestration without sacrificing flexibility.

The pattern uses the ToolLoopAgent abstraction, which handles the agent execution loop automatically — calling tools, checking results, and deciding when to stop. The orchestrator sits above this loop, making routing decisions and merging results.

Build on this pattern when you need a hub-and-spoke architecture: a central coordinator that understands the shape of the problem and delegates to specialists. It scales naturally — adding a new capability means adding a new sub-agent, not rewriting the core logic.
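A simplified hub-and-spoke sketch of the idea above, with the sub-agents modeled as plain async functions and the coordinator merging their outputs into a typed record. The pattern's actual ToolLoopAgent class and Output.object wiring are not reproduced here; all names and prompts are illustrative.

```typescript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

type SubAgent = (task: string) => Promise<string>;

// Each sub-agent is scoped to one capability via its system prompt.
const makeSubAgent = (system: string): SubAgent => async (task) => {
  const { text } = await generateText({
    model: openai("gpt-4o-mini"),
    system,
    prompt: task,
  });
  return text;
};

const subAgents = {
  research: makeSubAgent("You find and cite relevant sources."),
  analysis: makeSubAgent("You analyze data and summarize findings."),
  support: makeSubAgent("You answer product questions concisely."),
} satisfies Record<string, SubAgent>;

// The coordinator decides which specialists to invoke and merges
// their outputs into one structured response.
export async function orchestrate(
  task: string,
  needed: (keyof typeof subAgents)[],
) {
  const results = await Promise.all(
    needed.map(async (name) => [name, await subAgents[name](task)] as const),
  );
  return Object.fromEntries(results);
}
```

Adding a capability here means adding one entry to `subAgents`, which is the scaling property the pattern is after.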

APIs: ToolLoopAgent, createAgentUIStreamResponse, tool(), Output.object, stepCountIs, gateway, InferAgentUIMessage
Services: openai, exa
Tags: ai, agents, orchestrator, ai-sdk, custom-agent, sub-agents, routing, options, structured-output
HIL Tool Approval Basic

The human-in-the-loop pattern adds a confirmation gate before the AI executes sensitive actions. Instead of letting the agent autonomously call tools, the system pauses execution, presents the proposed action to the user, and waits for explicit approval before proceeding.

This is built on the AI SDK's toolCallConfirmation mechanism. When the agent decides to call a tool, the UI renders a confirmation dialog showing exactly what the tool will do — the function name, parameters, and a human-readable description. The user can approve, reject, or modify the parameters before execution continues.

The key design decision is where to draw the confirmation boundary. Not every tool call needs approval — reading data is usually safe, but writing, deleting, or sending external requests should require confirmation. The pattern lets you configure which tools are gated and which run automatically.

Use this pattern in any production agent that takes real-world actions: sending emails, modifying databases, calling external APIs, or making purchases. The small latency cost of human confirmation is far cheaper than the cost of an AI making an irreversible mistake.
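The "confirmation boundary" decision can be expressed as a small declarative policy map like the sketch below. The tool names are hypothetical, and wiring this check into the SDK's confirmation flow is left to the pattern's full source; the point is the shape of the configuration.

```typescript
type ToolPolicy = "auto" | "confirm";

// Reads run automatically; writes, sends, and deletes are gated.
const toolPolicies: Record<string, ToolPolicy> = {
  searchDocs: "auto",
  fetchInvoice: "auto",
  sendEmail: "confirm",
  deleteRecord: "confirm",
  createCharge: "confirm",
};

// Unknown tools default to "confirm": fail closed, not open.
export function needsApproval(toolName: string): boolean {
  return (toolPolicies[toolName] ?? "confirm") === "confirm";
}
```

Failing closed on unregistered tools is the safe default here, for the same reason the prose gives: a moment of user latency is cheaper than an irreversible action.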

APIs: streamText, gateway, stepCountIs, convertToModelMessages, tool(), tools:
Services: openai
Tags: ai, human-in-the-loop, ai-sdk, tool-approval, chat-interface, rate-limiting, safe-ai, workflow-management
HIL Needs Approval

The tool approval pattern extends human-in-the-loop with a structured approval workflow inside a chat interface. When the agent proposes a tool call, the chat renders an inline approval card with action details, and the conversation pauses until the user responds.

What distinguishes this from a simple confirmation dialog is the conversational context. The approval request appears as part of the chat history, so the user can see exactly why the agent wants to take the action. They can ask follow-up questions, request modifications, or reject the action — all within the same conversation flow.

The pattern uses the AI SDK's tool() API with an approval step injected into the execution pipeline. The tool definition includes metadata that the UI uses to render a rich approval card — not just raw parameters, but a human-readable summary of the intended action.

This is the right pattern for customer-facing agents where trust and transparency matter. Financial advisors, healthcare assistants, admin bots — any context where the user needs to understand and authorize what the AI is about to do before it happens.
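One way to model the "human-readable summary" metadata is a per-tool summarizer that the approval card calls before rendering, as in this sketch. The tool names and parameter fields are illustrative, not the pattern's actual definitions.

```typescript
type Summarizer = (params: Record<string, unknown>) => string;

// Each gated tool registers a function that turns raw parameters into
// the one-line summary shown on the inline approval card.
const summarizers: Record<string, Summarizer> = {
  transferFunds: (p) => `Transfer $${p.amount} from ${p.from} to ${p.to}`,
  scheduleAppointment: (p) => `Book ${p.service} on ${p.date} at ${p.time}`,
};

// Fall back to raw JSON when no summarizer is registered, so the user
// still sees exactly what the agent intends to run.
export function approvalSummary(
  tool: string,
  params: Record<string, unknown>,
): string {
  const summarize = summarizers[tool];
  return summarize ? summarize(params) : `${tool}(${JSON.stringify(params)})`;
}
```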

APIs: new Agent, tool(), stepCountIs, Experimental_Agent, tools:, gateway, UIToolInvocation
Services: google-ai
Tags: ai, agents, tool-approval, async-generator, approval-patterns, ai-sdk-v5, streaming, real-time-feedback, tool-execution, workflow-management
Orchestrator-Worker Pattern

The orchestrator-worker pattern tackles complex, multi-phase projects by breaking them into discrete tasks assigned to specialized worker agents. An orchestrator agent plans the work, assigns tasks, tracks progress, and synthesizes the final deliverables.

This is the most advanced coordination pattern in the collection. The orchestrator uses new Agent with strongly-typed tools to manage the full project lifecycle: planning phases, assigning workers, monitoring progress, resolving blockers, and collecting results. Each worker agent is scoped to a specific domain — design, engineering, testing — with tools and prompts tailored to their specialty.

The key difference from simple routing is state management. The orchestrator maintains a project state object that tracks which tasks are complete, which are blocked, and what the overall progress looks like. Workers report back through structured tool outputs, and the orchestrator decides what to do next.

This pattern shines for multi-step, multi-discipline projects: software feature development, content production pipelines, research coordination, or anything that requires planning before execution. It is more complex to set up than routing or parallel processing, but the payoff is full lifecycle management of non-trivial work.
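The state-management core described above can be sketched as plain data plus small pure transitions. The task names and fields are assumptions; in the full pattern, workers would update this state through structured tool outputs rather than direct calls.

```typescript
type TaskStatus = "pending" | "in_progress" | "blocked" | "done";

interface Task {
  id: string;
  owner: string; // which worker agent owns this task
  status: TaskStatus;
}

export interface ProjectState {
  tasks: Task[];
}

// A worker (or the orchestrator) moves a task between statuses.
export function setStatus(
  state: ProjectState,
  id: string,
  status: TaskStatus,
): ProjectState {
  return {
    tasks: state.tasks.map((t) => (t.id === id ? { ...t, status } : t)),
  };
}

// The orchestrator's "what next?" check: is there runnable work left?
export function nextPending(state: ProjectState): Task | undefined {
  return state.tasks.find((t) => t.status === "pending");
}

// Overall progress, as the fraction of tasks completed.
export function progress(state: ProjectState): number {
  const done = state.tasks.filter((t) => t.status === "done").length;
  return state.tasks.length ? done / state.tasks.length : 1;
}
```

Keeping the transitions pure makes the orchestrator's decisions easy to test independently of any model call.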

APIs: new Agent, tool(), stepCountIs, Experimental_Agent, tools:, gateway
Services: openai
Tags: ai, agents, orchestrator, worker, project-management, coordination, ai-sdk-v5, strongly-typed, streaming, deliverables
Research Agent Chain

The agent-to-agent workflow demonstrates how to chain multiple agents into a sequential pipeline where each agent's output feeds into the next. Unlike parallel processing where agents work independently, this pattern creates a linear assembly line of specialized AI steps.

The first agent analyzes the input and produces a structured intermediate result. The second agent takes that result and transforms it further — adding context, reformatting, or enriching with additional data. Each agent has its own system prompt, model, and tool set optimized for its specific stage.

The pattern uses generateText calls chained with await, passing the output of one directly into the prompt of the next. The intermediate results are typed with Zod schemas, giving you compile-time safety across the agent boundary. If the first agent's output schema changes, TypeScript catches the mismatch immediately.

Build on this pattern for multi-stage content pipelines: draft → review → polish, or extract → enrich → summarize. The key advantage over a single prompt is that each agent can be tested, debugged, and improved independently. You can swap out one stage without touching the others.
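A two-stage chain along these lines might look like the sketch below: stage one produces a Zod-typed intermediate result, stage two consumes it. The schema, prompts, and model choices are illustrative assumptions, not the pattern's exact pipeline.

```typescript
import { generateObject, generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// The typed boundary between the two agents. If this schema changes,
// TypeScript flags every downstream use of the intermediate result.
const outlineSchema = z.object({
  title: z.string(),
  keyPoints: z.array(z.string()),
});

export async function draftArticle(topic: string) {
  // Stage 1: extract a structured outline from the raw topic.
  const { object: outline } = await generateObject({
    model: openai("gpt-4o-mini"),
    schema: outlineSchema,
    prompt: `Outline an article about: ${topic}`,
  });

  // Stage 2: the next agent's prompt is built directly from the
  // typed output of the previous stage.
  const { text } = await generateText({
    model: openai("gpt-4o"),
    system: "You are a technical writer. Expand outlines into prose.",
    prompt: `Write the article "${outline.title}" covering:\n- ${outline.keyPoints.join("\n- ")}`,
  });
  return text;
}
```

Each stage can be unit-tested in isolation by feeding it a hand-written intermediate object, which is the independent-testability advantage the prose mentions.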

APIs: ToolLoopAgent, createAgentUIStreamResponse, tool(), Output.object, stepCountIs, prepareStep, gateway, InferAgentUIMessage
Services: openai, exa
Tags: ai, agents, chain, ai-sdk, structured-output, agent-chain, exa, sequential-agents, research, synthesis
Multi-Step Tool Pattern

The multi-step tool pattern gives an agent the ability to reason through complex problems iteratively — calling tools, evaluating results, and deciding what to do next, all within a single conversation turn.

Unlike simple tool calling where the model makes one tool call and returns, this pattern uses stepCountIs to allow the agent to take multiple steps. The agent might search the web, analyze the results, search again with a refined query, and then synthesize everything into a final answer. Each step builds on the previous one.

The pattern includes real-time web search and news integration through typed tools. The agent can pull current information, cross-reference sources, and build up context progressively. The hasToolCall utility lets you check what tools were invoked, enabling conditional logic in the UI.

This is the pattern to reach for when a single tool call is not enough. Research tasks, market analysis, technical debugging — anything that requires the kind of iterative exploration a human would do. The agent decides how many steps it needs; you just set the upper bound.
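A minimal sketch of the loop described above, assuming AI SDK v5's `stopWhen: stepCountIs(...)` option: the bound caps the tool-calling rounds in one turn, and the model decides how many it actually uses. The `webSearch` tool body is a stub standing in for the real search integration.

```typescript
import { generateText, tool, stepCountIs } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

export async function research(question: string) {
  const { text, steps } = await generateText({
    model: openai("gpt-4o"),
    tools: {
      webSearch: tool({
        description: "Search the web for current information",
        inputSchema: z.object({ query: z.string() }),
        execute: async ({ query }) => {
          // Stub: swap in a real search provider (e.g. a news API) here.
          return { results: [`(placeholder result for "${query}")`] };
        },
      }),
    },
    // Upper bound on iterations; each step sees the previous results,
    // so the agent can refine its query before answering.
    stopWhen: stepCountIs(5),
    prompt: question,
  });
  return { text, stepsTaken: steps.length };
}
```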

APIs: new Agent, tool(), stepCountIs, generateObject, Experimental_Agent, tools:, gateway, hasToolCall
Services: perplexity, openai, hackernews
Tags: ai, agents, tools, multi-step, web-search, news, analysis, ai-sdk-v5, strongly-typed, streaming

All Patterns

43 more
01. Parallel Processing Pattern (Intermediate)
02. HIL Agentic Context Builder (Intermediate)
03. AI SDK Gemini Flash Text (Intermediate)
04. AI SDK Gemini Flash Image (Intermediate)
05. AI SDK Gemini Flash Image Edit (Intermediate)
06. AI SDK Gemini Flash Image Merge (Intermediate)
07. Evaluator-Optimizer Pattern (Advanced)
08. HIL Inquire Multiple Choice (Intermediate)
09. HIL Inquire Text Input (Intermediate)
10. Tool Input Lifecycle Hooks (Beginner)
11. Preliminary Tool Results (Beginner)
12. Tool API Context (Beginner)
13. Tool Call Repair (Intermediate)
14. Dynamic Tool (Intermediate)
15. Structured Agent Output: Output.choice (Intermediate)
16. Structured Agent Output: Output.array (Intermediate)
17. Chat-Base Clone (Intermediate)
18. AI Form Generator (Intermediate)
19. Human in the Loop Plan Builder Agent (Advanced)
20. Branding Agent (Intermediate)
21. Competitor Research Agent (Intermediate)
22. Data Analysis Agent (Intermediate)
23. Accessibility Audit Agent (Intermediate)
24. SEO Audit (Beginner)
25. Reddit Product Validation Agent (Intermediate)
26. Levee Brand Strategy (Advanced)
27. Generate Speech (OpenAI)
28. Transcribe Audio (OpenAI)
29. Generate Text
30. Stream Text
31. Streaming Structured Output
32. OpenAI Structured Output
33. Claude Structured Output
34. Gemini Structured Output
35. Generate Image (OpenAI)
36. Generate Image (Fal.ai)
37. Generate Speech (ElevenLabs)
38. Transcribe Audio (ElevenLabs)
39. Search - Exa AI (robust)
40. Search - Firecrawl (robust)
41. Scrape - Cheerio (lightweight)
42. Scrape - Jina AI (advanced)
43. Scrape - Markdown.new (free)
End of AI Agent Builder