Streaming Output
Once your sub-agents have fetched and processed their data,
the final step is streaming the answer — both as text and visuals.
This creates that “instant, living UI” effect:
numbers update, tables grow, charts animate — while the agent is still thinking.
Why Streaming Matters
Without streaming, you’d have to wait for the entire agent loop to finish before seeing anything.
With streaming, users get feedback immediately — making the AI feel alive and responsive.
For example:
“Show me this week’s sales and top-selling products.”
Here’s what the user experiences:
- Within seconds — a sales total appears as text
- Moments later — a chart and table stream in, live
- The full report completes with KPIs and summary text
Behind the scenes, text and artifacts flow from agents in parallel streams.
Streaming Text and Artifacts Together
The AI SDK v5 supports streaming both text and structured artifacts in one flow.
Artifacts are typed objects that render instantly on a visual canvas:
charts, tables, key performance indicators (KPIs), and more.
For advanced artifact handling and streaming patterns, use @ai-sdk-tools/artifacts from the AI SDK Tools ecosystem.
Example
```ts
import { inventoryAgent } from "./inventory-agent"
import { ordersAgent } from "./orders-agent"

export async function generateEcommerceReport() {
  const [ordersStream, inventoryStream] = await Promise.all([
    ordersAgent.stream({ prompt: "Show me this week's revenue" }),
    inventoryAgent.stream({ prompt: "Show me low-stock products" }),
  ])

  // Stream text as it comes in
  for await (const chunk of ordersStream.textStream) {
    process.stdout.write(chunk)
  }

  // Stream artifacts (charts, tables, KPIs)
  for await (const artifact of ordersStream.artifactStream) {
    renderToCanvas(artifact)
  }

  for await (const artifact of inventoryStream.artifactStream) {
    renderToCanvas(artifact)
  }
}
```

In a UI, each artifact is rendered as soon as it arrives — no waiting for the full response.
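The example leaves renderToCanvas up to your app. Here is a minimal, framework-free sketch of what it could do, assuming a plain DOM target and a simple artifact shape (both are assumptions for illustration, not SDK APIs):

```ts
// Hypothetical renderToCanvas: append one DOM node per artifact as it arrives,
// so the canvas fills in progressively while the agent is still streaming.
function renderToCanvas(artifact: { type: string; title?: string }) {
  const canvas = document.querySelector("#canvas")
  if (!canvas) return

  const card = document.createElement("section")
  card.dataset.artifactType = artifact.type
  card.textContent = artifact.title ?? artifact.type
  canvas.appendChild(card) // visible immediately, before the stream finishes
}
```

In a real dashboard, each artifact type would map to a chart, table, or KPI component rather than a plain card.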
Types of Artifacts
You can stream structured data like the following (sketched as TypeScript types after this list):
- 📊 Charts — bar, line, pie, area
- 🧾 Tables — paginated data or transaction lists
- 💡 KPIs — key numeric indicators (e.g., “Revenue: $124K”)
- 📦 Objects — custom visuals or summaries
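As a rough mental model (not the actual SDK types, which you define yourself, for example via @ai-sdk-tools/artifacts schemas), these categories can be pictured as a discriminated union on the client:

```ts
// Illustrative shapes only; real artifact types come from your own definitions.
type ChartArtifact = {
  type: "chart"
  title: string
  data: Record<string, string | number>[]
  x: string
  y: string
}

type TableArtifact = {
  type: "table"
  title: string
  columns: string[]
  rows: (string | number)[][]
}

type KpiArtifact = {
  type: "kpi"
  label: string
  value: string // e.g. "Revenue: $124K"
}

type ObjectArtifact = {
  type: "object"
  payload: unknown // custom visuals or summaries
}

type StreamedArtifact = ChartArtifact | TableArtifact | KpiArtifact | ObjectArtifact
```

A switch on `type` is then all the canvas needs to pick the right renderer.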
Example: Defining an Artifact Tool
```ts
import { Artifact, tool } from "ai"
import { z } from "zod"

// For advanced artifact patterns, consider using @ai-sdk-tools/artifacts
export const getSalesChart = tool({
  description: "Render a sales chart for a date range",
  inputSchema: z.object({
    startDate: z.string(),
    endDate: z.string(),
  }),
  execute: async ({ startDate, endDate }) => {
    // Demo data; a real implementation would query sales between startDate and endDate
    const data = [
      { day: "Mon", revenue: 4000 },
      { day: "Tue", revenue: 6800 },
      { day: "Wed", revenue: 7200 },
      { day: "Thu", revenue: 5800 },
      { day: "Fri", revenue: 9100 },
    ]
    return Artifact.chart({
      title: "Weekly Revenue",
      data,
      x: "day",
      y: "revenue",
    })
  },
})
```

This artifact can be rendered directly in your app UI or dashboard — as soon as it's streamed.
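To put the tool to work, one option is to wire it into a streamText call; the model choice, prompt, and import path below are assumptions for illustration:

```ts
import { streamText } from "ai"
import { openai } from "@ai-sdk/openai"
import { getSalesChart } from "./get-sales-chart" // hypothetical path to the tool above

export async function runSalesReport() {
  // The model can call getSalesChart while its prose streams to the UI.
  // (In multi-step setups you may also want a stop condition such as stopWhen
  // so the model keeps writing after the tool result.)
  const result = streamText({
    model: openai("gpt-4o"),
    prompt: "Show me this week's sales as a chart",
    tools: { getSalesChart },
  })

  for await (const delta of result.textStream) {
    process.stdout.write(delta)
  }
}
```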
Combining Text + Visuals
The real magic happens when the LLM weaves artifacts into its text response:
```md
**Revenue Report**

Here's this week's performance:

- Total revenue: **$32,800**
- Average order value: **$86**
- Top categories: "Accessories", "Home Office"

<Artifact name="Weekly Revenue" />
```

The model streams the Markdown text and artifact references at the same time — so your frontend can render both instantly.
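How the frontend turns those inline references into components is up to you. One hypothetical approach is to split the streamed Markdown on the placeholder tag and render artifact segments separately (the tag format and the splitMarkdown helper are assumptions, not an SDK API):

```ts
// Hypothetical helper: split streamed Markdown on <Artifact name="..." />
// placeholders so text and artifact references can be rendered side by side.
const ARTIFACT_TAG = /<Artifact name="([^"]+)"\s*\/>/g

type Segment =
  | { kind: "text"; content: string }
  | { kind: "artifact"; name: string }

export function splitMarkdown(markdown: string): Segment[] {
  const segments: Segment[] = []
  let lastIndex = 0

  for (const match of markdown.matchAll(ARTIFACT_TAG)) {
    const index = match.index ?? 0
    if (index > lastIndex) {
      segments.push({ kind: "text", content: markdown.slice(lastIndex, index) })
    }
    segments.push({ kind: "artifact", name: match[1] })
    lastIndex = index + match[0].length
  }

  if (lastIndex < markdown.length) {
    segments.push({ kind: "text", content: markdown.slice(lastIndex) })
  }
  return segments
}
```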
Architecture in Action
Here’s how everything ties together:
Each agent produces:
- Structured data from tools
- Visual artifacts from the SDK (enhanced with @ai-sdk-tools/artifacts)
- Natural language context from the LLM
Your frontend renders both as soon as they arrive.
Why It Works
This pattern feels conversational, but it’s actually structured:
| Component | Role |
|---|---|
| Router | Detects intent |
| Sub-Agents | Handle domain-specific logic |
| Tools | Fetch real data |
| Cache | Ensures deterministic reads |
| Artifacts | Render structured visuals |
| Stream | Ties it all together in real time |
Together, these layers make your AI both reliable and delightful to use.