
Get Started

A comprehensive guide to building your first Deep Agent, from installation to your first working agent.

What is Deep Agent?

Deep Agent extends basic LLM tool-calling with four core capabilities:

  • Planning Tool - write_todos: break down complex tasks and track progress
  • Virtual Filesystem - persistent state across tool calls
  • Subagent Spawning - task tool: delegate work to specialized agents
  • Detailed Prompting - context-aware instructions and tool descriptions

This combination enables agents to handle complex, multi-step tasks that would overwhelm simpler "shallow" agents.

Prerequisites

Before You Begin

Ensure you have the following ready:
  • Bun runtime (>= 1.0.0) - This package requires Bun for TypeScript features and performance
  • Node.js compatible environment
  • API Key for at least one of:
    • Anthropic (Claude) - Recommended
    • OpenAI (GPT)
    • Azure OpenAI
    • Or any AI SDK v6 compatible provider

Install Bun (if needed)

# Install Bun
curl -fsSL https://bun.sh/install | bash

# Verify installation
bun --version

Installation

Option 1: New project

# Create a new project
mkdir my-agent-project
cd my-agent-project
bun init -y

# Install the package
bun add ai-sdk-deep-agent

# Install AI SDK providers
bun add @ai-sdk/anthropic @ai-sdk/openai

Option 2: Existing project

cd your-project
bun add ai-sdk-deep-agent

Option 3: Global install (CLI)

# Install globally
bun add -g ai-sdk-deep-agent

# Now you can run from anywhere
deep-agent

Configuration

Set Up API Keys

Create a .env file in your project root:

.env
# Anthropic (recommended)
ANTHROPIC_API_KEY=sk-ant-your-key-here

# OpenAI (alternative)
OPENAI_API_KEY=sk-your-key-here

# Tavily (for web search tools - optional)
TAVILY_API_KEY=tvly-your-key-here
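
If you want to fail fast on a missing key, a quick check like the sketch below works. It relies only on the fact that Bun loads .env from the project root automatically, so the variables are already on process.env:

check-env.ts
// Bun loads .env automatically, so these keys are on process.env at runtime.
const hasProviderKey = Boolean(process.env.ANTHROPIC_API_KEY || process.env.OPENAI_API_KEY);

if (!hasProviderKey) {
  console.error('No provider key found - add ANTHROPIC_API_KEY or OPENAI_API_KEY to .env');
  process.exit(1);
}

console.log('Provider key detected - you are ready to create an agent.');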

Choose a Model

The library requires AI SDK LanguageModel instances (not strings):

import { anthropic } from '@ai-sdk/anthropic';
import { openai } from '@ai-sdk/openai';

// Anthropic Claude (recommended)
const model1 = anthropic('claude-sonnet-4-5-20250929');

// Anthropic Claude Sonnet 4 (older)
const model2 = anthropic('claude-sonnet-4-20250514');

// Anthropic Claude Haiku (faster, cheaper)
const model3 = anthropic('claude-haiku-4-5-20251001');

// OpenAI GPT-5
const model4 = openai('gpt-5');

// OpenAI GPT-4o
const model5 = openai('gpt-4o');
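
If you switch providers often, you can pick the model from an environment variable. This is only a sketch: MODEL_PROVIDER is a made-up variable name for illustration, not something the library reads.

import { anthropic } from '@ai-sdk/anthropic';
import { openai } from '@ai-sdk/openai';

// MODEL_PROVIDER is a made-up env var for this sketch; the library does not read it.
const model =
  process.env.MODEL_PROVIDER === 'openai'
    ? openai('gpt-4o')
    : anthropic('claude-sonnet-4-5-20250929'); // default to Claude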

Configure TypeScript

Ensure your tsconfig.json has these settings:

tsconfig.json
{
  "compilerOptions": {
    "target": "ESNext",
    "module": "ESNext",
    "moduleResolution": "bundler",
    "types": ["bun-types"],
    "strict": true
  }
}

Your First Agent

Basic Example

Create a file agent.ts:

agent.ts
import { createDeepAgent } from 'ai-sdk-deep-agent';
import { anthropic } from '@ai-sdk/anthropic';

// Create the agent
const agent = createDeepAgent({
  model: anthropic('claude-sonnet-4-5-20250929'),
  systemPrompt: 'You are a helpful research assistant.',
});

// Run the agent
const result = await agent.generate({
  prompt: 'Research the benefits of TypeScript and write a summary to /summary.md',
  maxSteps: 10,
});

// Output the results
console.log('Response:', result.text);
console.log('Todos:', result.state.todos);
console.log('Files:', Object.keys(result.state.files));

Run it:

ANTHROPIC_API_KEY=your-key bun run agent.ts

Streaming Example

For real-time feedback, use streaming:

import { createDeepAgent } from 'ai-sdk-deep-agent';
import { anthropic } from '@ai-sdk/anthropic';

const agent = createDeepAgent({
  model: anthropic('claude-sonnet-4-5-20250929'),
  systemPrompt: 'You are a helpful assistant.',
});

// Stream with events
for await (const event of agent.streamWithEvents({
  messages: [{ role: 'user', content: 'Create a project plan' }],
})) {
  switch (event.type) {
    case 'text':
      process.stdout.write(event.text);
      break;
    case 'tool-call':
      console.log(`\n[Tool: ${event.toolName}]`);
      break;
    case 'todos-changed':
      console.log('\n[Todos updated]');
      break;
    case 'file-written':
      console.log(`\n[File: ${event.path}]`);
      break;
    case 'done':
      console.log('\n[Done]');
      break;
  }
}

Understanding Core Concepts

1. Todos (Planning)

The write_todos tool enables task planning and tracking:

// Agent automatically uses todos when you prompt it to plan
const result = await agent.generate({
  prompt: `
    Create a plan for building a todo app:
    1. Use write_todos to break down the task
    2. Work through each task systematically
    3. Update todo status as you progress
  `,
});

// Access todo state
result.state.todos.forEach(todo => {
  console.log(`[${todo.status}] ${todo.content}`);
});

Todo Status Flow:

  • pending - Task not started
  • in_progress - Currently working on (only one at a time)
  • completed - Task finished
  • cancelled - Task abandoned
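
As a quick way to inspect progress, you can tally the todos by status after a run. This sketch continues from the example above and relies only on the status and content fields shown there:

// Continuing from the example above: tally todos by status after the run.
const counts: Record<string, number> = {};
for (const todo of result.state.todos) {
  counts[todo.status] = (counts[todo.status] ?? 0) + 1;
}
console.log(counts); // e.g. { completed: 4, pending: 1 }

// Anything not completed or cancelled is still open.
const open = result.state.todos.filter(
  (todo) => todo.status === 'pending' || todo.status === 'in_progress'
);
console.log(`${open.length} task(s) still open`);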

2. Virtual Filesystem

Agents can read, write, and edit files in a virtual filesystem:

const agent = createDeepAgent({
  model: anthropic('claude-sonnet-4-5-20250929'),
  systemPrompt: 'You are a coding assistant.',
});

const result = await agent.generate({
  prompt: 'Create a TypeScript file at /utils/math.ts with some utility functions',
});

// Access created files
for (const [path, file] of Object.entries(result.state.files)) {
  console.log(`File: ${path}`);
  console.log(file.content.join('\n'));
}

Built-in Filesystem Tools:

  • ls - List files in a directory
  • read_file - Read file contents
  • write_file - Create a new file
  • edit_file - Replace text in existing file
  • glob - Find files matching pattern (e.g., **/*.ts)
  • grep - Search text within files
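
By default the virtual filesystem lives in memory, so nothing touches disk unless you copy it out yourself. Here is a minimal sketch that does that, assuming the content: string[] shape used above (for a backend that writes to disk during the run, see Pattern 1 below):

import { mkdirSync, writeFileSync } from 'node:fs';
import { dirname, join } from 'node:path';

// Copy the agent's virtual files to ./output so you can open them in an editor.
// Assumes file.content is an array of lines, as in the example above.
for (const [path, file] of Object.entries(result.state.files)) {
  const target = join('./output', path.replace(/^\//, '')); // '/utils/math.ts' -> 'output/utils/math.ts'
  mkdirSync(dirname(target), { recursive: true });
  writeFileSync(target, file.content.join('\n'));
}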

3. Subagents (Task Delegation)

Spawn specialized agents for complex subtasks:

import { createDeepAgent, type SubAgent } from 'ai-sdk-deep-agent';
import { anthropic } from '@ai-sdk/anthropic';

// Define a specialized research subagent
const researchSubagent: SubAgent = {
  name: 'researcher',
  description: 'Expert in research and data gathering',
  systemPrompt: 'You are a research specialist. Gather comprehensive information.',
};

const agent = createDeepAgent({
  model: anthropic('claude-sonnet-4-5-20250929'),
  subagents: [researchSubagent],
});

// Agent can now delegate tasks to the research subagent
const result = await agent.generate({
  prompt: 'Research AI safety and compile a report',
});

Subagent Benefits:

  • Context Isolation - Prevents main agent context bloat
  • Parallel Execution - Multiple subagents can run simultaneously
  • Shared Filesystem - Subagents share files with parent
  • Independent History - Separate conversation history per subagent
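
To see delegation in action, you can register more than one subagent and ask the main agent to hand work to each in turn. A sketch using only the SubAgent fields shown above (the prompts are illustrative):

import { createDeepAgent, type SubAgent } from 'ai-sdk-deep-agent';
import { anthropic } from '@ai-sdk/anthropic';

const researcher: SubAgent = {
  name: 'researcher',
  description: 'Gathers facts, sources, and raw notes on a topic',
  systemPrompt: 'You are a research specialist. Collect detailed notes and save them to files.',
};

const writer: SubAgent = {
  name: 'writer',
  description: 'Turns research notes into polished prose',
  systemPrompt: 'You are a technical writer. Read the notes on the shared filesystem and draft clear reports.',
};

const agent = createDeepAgent({
  model: anthropic('claude-sonnet-4-5-20250929'),
  subagents: [researcher, writer],
});

const result = await agent.generate({
  prompt: 'Use the researcher to gather notes on WebAssembly, then have the writer produce /report.md',
});

console.log(Object.keys(result.state.files)); // notes and report share one filesystem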

Common Setup Patterns

Pattern 1: With Filesystem Persistence

Store files on disk for persistence across sessions:

import { createDeepAgent, FilesystemBackend } from 'ai-sdk-deep-agent';
import { anthropic } from '@ai-sdk/anthropic';

const agent = createDeepAgent({
  model: anthropic('claude-sonnet-4-5-20250929'),
  backend: new FilesystemBackend({ rootDir: './workspace' }),
});

const result = await agent.generate({
  prompt: 'Create a project and save files',
});

// Files are now on disk at ./workspace/
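
To confirm the backend wrote real files, you can list the workspace with plain node:fs (which Bun supports):

import { readdirSync } from 'node:fs';

// The FilesystemBackend writes real files, so standard fs calls can see them.
console.log(readdirSync('./workspace'));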

Pattern 2: With Checkpointing (Session Persistence)

Resume conversations across sessions:

import { createDeepAgent, FileSaver } from 'ai-sdk-deep-agent';
import { anthropic } from '@ai-sdk/anthropic';

const agent = createDeepAgent({
  model: anthropic('claude-sonnet-4-5-20250929'),
  checkpointer: new FileSaver({ dir: './.checkpoints' }),
});

const threadId = 'user-session-123';

// First session
for await (const event of agent.streamWithEvents({
  messages: [{ role: 'user', content: 'Create a plan' }],
  threadId,
})) {
  // ... handle events
  // Checkpoint automatically saved after each step
}

// Later: Resume the session
for await (const event of agent.streamWithEvents({
  messages: [{ role: 'user', content: 'Continue from where we left off' }],
  threadId, // Same threadId restores checkpoint
})) {
  // Agent has full context from previous session
}

Pattern 3: With Custom Tools

Add your own tools alongside built-in ones:

import { createDeepAgent } from 'ai-sdk-deep-agent';
import { anthropic } from '@ai-sdk/anthropic';
import { tool } from 'ai';
import { z } from 'zod';

// Define a custom tool
const weatherTool = tool({
  description: 'Get the current weather for a location',
  inputSchema: z.object({
    location: z.string().describe('City name'),
  }),
  execute: async ({ location }) => {
    // Fetch weather data
    return `Weather in ${location}: 72°F, sunny`;
  },
});

const agent = createDeepAgent({
  model: anthropic('claude-sonnet-4-5-20250929'),
  tools: {
    get_weather: weatherTool,
  },
});

const result = await agent.generate({
  prompt: 'What is the weather in San Francisco?',
});

Pattern 4: With Command Execution (Sandbox)

Enable shell command execution:

import { createDeepAgent, LocalSandbox } from 'ai-sdk-deep-agent';
import { anthropic } from '@ai-sdk/anthropic';

const agent = createDeepAgent({
  model: anthropic('claude-sonnet-4-5-20250929'),
  backend: new LocalSandbox({
    cwd: './workspace',
    timeout: 60000, // 60 second timeout
  }),
  // execute tool is automatically added when using LocalSandbox!
});

const result = await agent.generate({
  prompt: 'Create a package.json and run npm install',
});

Pattern 5: With Web Research Tools (Tavily)

Tavily API Key Required

Enable web research capabilities by setting your Tavily API key in .env:

.env
TAVILY_API_KEY=tvly-your-key-here

import { createDeepAgent } from 'ai-sdk-deep-agent';
import { anthropic } from '@ai-sdk/anthropic';

// Web tools automatically enabled when TAVILY_API_KEY is set
const agent = createDeepAgent({
  model: anthropic('claude-sonnet-4-5-20250929'),
});

const result = await agent.generate({
  prompt: 'Research the latest React 19 features and summarize them',
});

Pattern 6: With Agent Memory

Give your agent persistent memory across conversations:

import { createDeepAgent } from 'ai-sdk-deep-agent';
import { anthropic } from '@ai-sdk/anthropic';

const agent = createDeepAgent({
  model: anthropic('claude-sonnet-4-5-20250929'),
  agentId: 'my-coding-assistant',
  // Memory auto-loaded from:
  // - ~/.deepagents/my-coding-assistant/agent.md (user-level)
  // - .deepagents/agent.md (project-level, if in git repo)
});

// Agent can read and update its own memory
const result = await agent.generate({
  prompt: 'Remember that I prefer 2-space indentation',
});

Memory File Format (~/.deepagents/my-coding-assistant/agent.md):

# My Coding Assistant

## User Preferences
- Prefers 2-space indentation
- Likes comprehensive JSDoc comments

## Working Style
- Ask clarifying questions before implementing
- Consider edge cases and error handling
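
If you want to bootstrap that memory programmatically, a small script can create the project-level file before the first run. This is a sketch using standard node:fs; the path comes from the comment above, and it assumes the library picks the file up on the next run:

import { existsSync, mkdirSync, writeFileSync } from 'node:fs';

// Seed a project-level memory file at the path described above (.deepagents/agent.md).
if (!existsSync('.deepagents/agent.md')) {
  mkdirSync('.deepagents', { recursive: true });
  writeFileSync(
    '.deepagents/agent.md',
    '# My Coding Assistant\n\n## User Preferences\n- Prefers 2-space indentation\n'
  );
}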

Pattern 7: Multi-Turn Conversations

Maintain conversation history across turns:

import { createDeepAgent, type ModelMessage } from 'ai-sdk-deep-agent';
import { anthropic } from '@ai-sdk/anthropic';

const agent = createDeepAgent({
  model: anthropic('claude-sonnet-4-5-20250929'),
});

let messages: ModelMessage[] = [];

// First turn
for await (const event of agent.streamWithEvents({
  messages: [{ role: 'user', content: 'Create a file called hello.txt' }],
})) {
  if (event.type === 'done') {
    messages = event.messages || [];
  }
}

// Second turn - agent remembers the file
for await (const event of agent.streamWithEvents({
  messages: [
    ...messages,
    { role: 'user', content: 'What file did you just create?' }
  ],
})) {
  if (event.type === 'text') {
    process.stdout.write(event.text);
  }
}
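
If you want an interactive loop instead of hard-coded turns, the same pattern extends naturally: keep feeding the messages returned by the done event back into the next call. A sketch using node:readline/promises, which Bun supports:

import { createInterface } from 'node:readline/promises';
import { createDeepAgent, type ModelMessage } from 'ai-sdk-deep-agent';
import { anthropic } from '@ai-sdk/anthropic';

const agent = createDeepAgent({
  model: anthropic('claude-sonnet-4-5-20250929'),
});

const rl = createInterface({ input: process.stdin, output: process.stdout });
let history: ModelMessage[] = [];

while (true) {
  const input = await rl.question('> ');
  if (input === '/exit') break;

  for await (const event of agent.streamWithEvents({
    messages: [...history, { role: 'user', content: input }],
  })) {
    if (event.type === 'text') process.stdout.write(event.text);
    if (event.type === 'done') history = event.messages || []; // carry context to the next turn
  }
  console.log();
}

rl.close();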

Troubleshooting

Next Steps

Explore Examples

Check out the /examples directory for working code:

  • basic.ts
  • streaming.ts
  • with-custom-tools.ts
  • with-checkpointer.ts
  • with-subagents.ts
  • web-research.ts

# Run any example
bun run examples/basic.ts

Try the CLI

Interactive CLI for quick prototyping:

# Start CLI
bunx ai-sdk-deep-agent

# Or with specific model
bunx ai-sdk-deep-agent --model anthropic/claude-sonnet-4-5-20250929

# CLI Commands
/help          # Show all commands
/todos         # Show todo list
/files         # List files
/read <path>   # Read a file
/clear         # Clear conversation
/model <name>  # Change model
/exit          # Exit CLI

Performance Tips

Optimize Your Agent

Follow these tips to improve performance and reduce costs:

1. Use Claude Haiku for faster responses:

const model = anthropic('claude-haiku-4-5-20251001');

2. Enable prompt caching for Anthropic:

const agent = createDeepAgent({
  model: anthropic('claude-sonnet-4-5-20250929'),
  enablePromptCaching: true,
});

3. Set token limits for large tool results:

const agent = createDeepAgent({
  model: anthropic('claude-sonnet-4-5-20250929'),
  toolResultEvictionLimit: 20000, // Evict results > 20k tokens
});

4. Use structured output for type-safe responses:

import { createDeepAgent } from 'ai-sdk-deep-agent';
import { anthropic } from '@ai-sdk/anthropic';
import { z } from 'zod';

const agent = createDeepAgent({
  model: anthropic('claude-sonnet-4-5-20250929'),
  output: {
    schema: z.object({
      summary: z.string(),
      keyPoints: z.array(z.string()),
    }),
  },
});

const result = await agent.generate({ prompt: '...' });
console.log(result.output?.summary); // Fully typed!

Getting Help

Summary

You now have everything you need to build your first Deep Agent:

  • ✅ Installed the package with Bun
  • ✅ Configured API keys
  • ✅ Created your first agent
  • ✅ Understood core concepts (todos, files, subagents)
  • ✅ Implemented common patterns
  • ✅ Learned how to troubleshoot common issues

Ready to build? Start with the basic example, then explore more advanced patterns as you need them. Happy building!
