3. Data & Cache Layer


Ensure reliable, deterministic data access for agents using Zod contracts and AI SDK caching.


The Data & Cache Layer is where your agents connect to real data — safely and predictably.

Every tool in your system should return typed, validated, and cacheable data.
This ensures that when the same question is asked twice, the same numbers come back — not “LLM guesses.”


The Problem

LLMs are powerful at reasoning, but they shouldn’t invent data.

When an agent says:

“Your total revenue is $124,000,”

that number must come from your system of record —
not from the model’s memory or reasoning.

That’s where deterministic tools and caching come in.


Deterministic Tools

Each agent’s tools are defined with Zod contracts.
They validate inputs and outputs so every tool call is stable and typed.

tools-with-zod.ts
import { tool } from "ai"
import { z } from "zod"
 
// Output contract: every result is parsed against this schema before
// it reaches the cache, UI, or analytics
const ordersOutput = z.object({
  startDate: z.string(),
  endDate: z.string(),
  revenue: z.number(),
  totalOrders: z.number(),
})
 
// Example: stable API connector for order data
export const getOrders = tool({
  description: "Fetch total revenue and order count",
  parameters: z.object({
    startDate: z.string(),
    endDate: z.string(),
  }),
  execute: async ({ startDate, endDate }) => {
    const response = await fetch(
      `https://api.yourstore.com/orders?start=${startDate}&end=${endDate}`
    )
    if (!response.ok) {
      throw new Error(`Orders API failed: ${response.status}`)
    }
    const data = await response.json()
    // Throws if the upstream payload drifts from the contract
    return ordersOutput.parse({
      startDate,
      endDate,
      revenue: data.revenue,
      totalOrders: data.count,
    })
  },
})

Inputs and outputs are guaranteed — so your cache, UI, and analytics all trust the same shape.


Caching for Reliability

Every tool call in the AI SDK can be wrapped with a deterministic cache key.

That key combines:

tool + version + orgId + role + params

When the same tool is called with the same parameters, the cache returns the stored result instantly, with no recomputation and no upstream API calls.

⚡ This makes your system fast, consistent, and cheap to run.
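One subtlety: JSON.stringify is order-sensitive, so a key builder should sort parameter keys before serializing. A minimal sketch of such a builder (`cacheKey` is a hypothetical helper, not an SDK export):

```typescript
// Build a deterministic cache key: tool + version + orgId + role + params.
// Parameter keys are sorted so { a, b } and { b, a } produce the same key.
function cacheKey(
  toolName: string,
  version: string,
  orgId: string,
  role: string,
  params: Record<string, unknown>
): string {
  const sortedParams = Object.keys(params)
    .sort()
    .map((k) => `${JSON.stringify(k)}:${JSON.stringify(params[k])}`)
    .join(",")
  return `${toolName}:${version}:${orgId}:${role}:{${sortedParams}}`
}

const key = cacheKey("getOrders", "v1", "org123", "read", {
  startDate: "2024-01-01",
  endDate: "2024-01-31",
})
// → getOrders:v1:org123:read:{"endDate":"2024-01-31","startDate":"2024-01-01"}
```

Without the sort, two semantically identical calls could miss each other in the cache just because their parameter objects were built in a different order.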

Example

cached-tool-call.ts
const key = `getOrders:v1:org123:read:${JSON.stringify({ startDate, endDate })}`
 
// sdkCache is your cache helper: return the stored value if present,
// otherwise compute it, store it, and return it
const result = await sdkCache.getOrSet(key, async () => {
  return await getOrders.execute({ startDate, endDate })
})
 
console.log(result) // same data for the same key, every time
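sdkCache above stands for whatever cache your application wires in (Redis, a KV store, and so on). A minimal in-memory sketch of the same getOrSet contract:

```typescript
// Minimal in-memory getOrSet: returns the stored value on a key hit,
// otherwise runs compute(), stores the result, and returns it.
class MemoryCache {
  private store = new Map<string, unknown>()

  async getOrSet<T>(key: string, compute: () => Promise<T>): Promise<T> {
    if (this.store.has(key)) {
      return this.store.get(key) as T
    }
    const value = await compute()
    this.store.set(key, value)
    return value
  }
}

const sdkCache = new MemoryCache()
```

The important property is that compute() runs at most once per key, so repeated identical questions never trigger repeated upstream calls.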

Safe Write Operations

For any write tool (like updating inventory, sending invoices, or restocking items), add a preview + confirm flow to prevent unintended changes.

safe-write-tools.ts
import { tool } from "ai"
import { z } from "zod"
import { db } from "./db" // your database client
 
export const updateInventory = tool({
  description: "Update stock levels for a product",
  parameters: z.object({
    productId: z.string(),
    quantity: z.number().int().nonnegative(),
  }),
  execute: async ({ productId, quantity }) => {
    // Preview step: describe the change without applying it.
    // The confirm callback is invoked by your application code after
    // the user approves; it is never serialized back to the model.
    return {
      preview: `Stock for product ${productId} will be set to ${quantity}.`,
      confirm: async () => {
        await db.products.update({ id: productId, stock: quantity })
        return { success: true }
      },
    }
  },
})

💡 Rule of thumb: All write operations must be previewed before being confirmed. This keeps the user in control and prevents surprises.


Why This Layer Matters

This layer guarantees that your agents:

  • Never hallucinate numbers
  • Return consistent results for identical inputs
  • Handle retries safely (idempotent reads, controlled writes)
  • Work offline from cached responses when possible
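The last bullet can be sketched as a fallback read: try the live source first, and serve the most recent cached value when it fails (`readWithFallback` is a hypothetical helper, not an SDK export):

```typescript
// Fallback read: prefer fresh data, but serve the last cached value
// when the upstream source is unavailable.
async function readWithFallback<T>(
  key: string,
  cache: Map<string, T>,
  fetchFresh: () => Promise<T>
): Promise<T> {
  try {
    const fresh = await fetchFresh()
    cache.set(key, fresh) // refresh the cache on success
    return fresh
  } catch (err) {
    const stale = cache.get(key)
    if (stale !== undefined) return stale // degrade gracefully
    throw err // no cached copy either: surface the failure
  }
}
```

This keeps reads idempotent and lets the agent keep answering from known-good numbers during an upstream outage, rather than guessing.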

The Core Idea

Numbers come from tools — never from the LLM.

By separating reasoning (LLM) from data (tools + cache), you get the best of both worlds:

  • The model can explain, summarize, and analyze
  • But your tools guarantee the facts

That’s the foundation of a reliable AI system.