# AI Integration Expert

The AI Integration Expert agent specializes in AI/LLM integration: the Anthropic Claude SDK, the Vercel AI SDK, streaming responses, and building AI-powered features.

## Expertise Areas

  • Anthropic Claude Integration - SDK setup, message handling
  • Streaming Responses - Real-time AI output streaming
  • Vercel AI SDK - Simplified AI integration patterns
  • Structured Output - JSON extraction and parsing
  • Tool Use / Function Calling - AI-driven tool execution
  • Rate Limiting - API usage management
  • Prompt Engineering - Effective prompt design
  • Cost Optimization - Token management and efficiency

## Usage Examples

### Claude Integration

Use the ai-integration-expert agent to set up Anthropic Claude integration for a Next.js chat application.

Response includes:

  • SDK initialization
  • Message handling
  • System prompts
  • Error handling

### Streaming Chat

Use the ai-integration-expert agent to implement a streaming chat interface with real-time responses.

Response includes:

  • API route with streaming
  • Client-side stream handling
  • UI component structure
  • Loading states

### Tool Use

Use the ai-integration-expert agent to implement function calling for a customer service bot.

Response includes:

  • Tool definitions
  • Execution handlers
  • Response continuation
  • Error handling

## Best Practices Applied

### 1. API Integration

  • Secure API key handling
  • Proper error management
  • Response type checking
  • Rate limiting implementation
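Transient failures and rate-limit errors (HTTP 429) are typically handled with exponential backoff. A minimal, dependency-free sketch of the pattern (the `withRetry` helper and its defaults are illustrative, not part of the Anthropic SDK; production code would retry only retryable errors such as 429/5xx):

```typescript
// Exponential backoff, capped: 500ms, 1s, 2s, 4s, ... up to capMs.
export function backoffDelay(attempt: number, baseMs = 500, capMs = 8000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// Retry an async call, waiting between attempts; rethrows the last
// error if every attempt fails.
export async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
      }
    }
  }
  throw lastError;
}
```

A call site would wrap the SDK call, e.g. `withRetry(() => anthropic.messages.create(...))`.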

### 2. Streaming

  • Efficient stream handling
  • Progress indicators
  • Graceful error recovery
  • Connection management

### 3. Structured Output

  • Zod schema validation
  • JSON parsing safety
  • Type inference
  • Fallback handling
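The agent reaches for Zod here; as a dependency-free sketch of the same parse-validate-fallback pattern (the `safeExtract` helper and `Contact` shape are illustrative — real code would replace the hand-written checks with a Zod schema and `schema.safeParse`):

```typescript
interface Contact {
  name: string;
  email: string;
}

// Parse model output as JSON, validate the shape, and fall back to
// null instead of throwing on malformed or off-schema output.
export function safeExtract(raw: string): Contact | null {
  try {
    const data: unknown = JSON.parse(raw);
    if (
      typeof data === 'object' &&
      data !== null &&
      typeof (data as Record<string, unknown>).name === 'string' &&
      typeof (data as Record<string, unknown>).email === 'string'
    ) {
      const { name, email } = data as { name: string; email: string };
      return { name, email };
    }
    return null; // valid JSON, wrong shape
  } catch {
    return null; // not JSON at all
  }
}
```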

### 4. Cost Management

  • Token counting
  • Usage tracking
  • Rate limiting
  • Caching strategies
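Every Anthropic response reports actual consumption in `response.usage.input_tokens` / `output_tokens`; a simple budget tracker built on those fields might look like this (the chars-per-token heuristic and the `TokenBudget` class are illustrative assumptions, not SDK features):

```typescript
// Very rough pre-flight estimate: English text averages ~4 chars/token.
// Use the real `usage` fields from each response for actual accounting.
export function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

export class TokenBudget {
  private used = 0;

  constructor(private readonly limit: number) {}

  // Record actual usage as reported by an API response.
  record(usage: { input_tokens: number; output_tokens: number }): void {
    this.used += usage.input_tokens + usage.output_tokens;
  }

  remaining(): number {
    return Math.max(0, this.limit - this.used);
  }

  // Gate a request whose estimated cost would exceed the budget —
  // a crude form of rate limiting by spend rather than by request count.
  allows(estimated: number): boolean {
    return estimated <= this.remaining();
  }
}
```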

## Common Patterns

### Anthropic Claude Setup

```typescript
// lib/anthropic.ts
import Anthropic from '@anthropic-ai/sdk';

export const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

export async function chat(prompt: string) {
  const response = await anthropic.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 1024,
    messages: [{ role: 'user', content: prompt }],
  });

  return response.content[0].type === 'text'
    ? response.content[0].text
    : null;
}
```

### Streaming API Route

```typescript
// app/api/chat/route.ts
import { anthropic } from '@/lib/anthropic';

export async function POST(req: Request) {
  const { messages } = await req.json();

  // messages.stream() returns a MessageStream synchronously; no await needed.
  const stream = anthropic.messages.stream({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 1024,
    messages,
  });

  return new Response(stream.toReadableStream(), {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      Connection: 'keep-alive',
    },
  });
}
```
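On the client, the streamed body from a route like this can be consumed with a `ReadableStream` reader. A minimal sketch (the `readTextStream` helper is illustrative; real code would also parse the SSE event framing rather than treating the body as raw text):

```typescript
// Read a byte stream chunk-by-chunk, invoking a callback per decoded
// chunk (e.g. to append tokens to the UI), and return the full text.
export async function readTextStream(
  stream: ReadableStream<Uint8Array>,
  onChunk: (chunk: string) => void,
): Promise<string> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let full = '';
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      const chunk = decoder.decode(value, { stream: true });
      full += chunk;
      onChunk(chunk);
    }
  } finally {
    reader.releaseLock();
  }
  return full;
}
```

In a component you would `fetch('/api/chat', ...)` and pass `res.body` to this helper, updating state inside `onChunk` to drive loading indicators and incremental rendering.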

### Vercel AI SDK Integration

```typescript
// app/api/chat/route.ts
import { anthropic } from '@ai-sdk/anthropic';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: anthropic('claude-sonnet-4-20250514'),
    messages,
  });

  return result.toDataStreamResponse();
}
```

### Tool Use Implementation

```typescript
import { anthropic } from '@/lib/anthropic';

const tools = [
  {
    name: 'get_weather',
    description: 'Get the current weather for a location',
    input_schema: {
      type: 'object',
      properties: {
        location: {
          type: 'string',
          description: 'City and state, e.g., San Francisco, CA',
        },
      },
      required: ['location'],
    },
  },
];

export async function chatWithTools(prompt: string) {
  const response = await anthropic.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 1024,
    tools,
    messages: [{ role: 'user', content: prompt }],
  });

  // Handle tool use in the response
  for (const block of response.content) {
    if (block.type === 'tool_use') {
      // executeTool is application code that dispatches to the real handler
      const result = await executeTool(block.name, block.input);
      // Continue the conversation with the tool result
    }
  }

  return response;
}
```
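"Continue the conversation" concretely means sending a `tool_result` content block back in a user message, keyed by the `tool_use` block's `id`. Building that follow-up request might look like this (the helper name is illustrative; the message shape follows the documented Anthropic format):

```typescript
type ContentBlock =
  | { type: 'text'; text: string }
  | { type: 'tool_use'; id: string; name: string; input: unknown };

type Message = { role: 'user' | 'assistant'; content: string | unknown[] };

// Build the message list for the follow-up request: the original user
// turn, the assistant turn containing the tool_use block, then a user
// turn carrying the tool_result keyed by tool_use_id.
export function buildToolResultMessages(
  prompt: string,
  assistantContent: ContentBlock[],
  toolUseId: string,
  toolResult: string,
): Message[] {
  return [
    { role: 'user', content: prompt },
    { role: 'assistant', content: assistantContent },
    {
      role: 'user',
      content: [
        { type: 'tool_result', tool_use_id: toolUseId, content: toolResult },
      ],
    },
  ];
}
```

The resulting array is passed to a second `anthropic.messages.create` call (with the same `tools`) so the model can compose its final answer from the tool output.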

## Sample Prompts

| Task | Prompt |
| --- | --- |
| Chat setup | "Set up a basic chat interface with Claude" |
| Streaming | "Implement streaming responses in a chat application" |
| Tool use | "Add function calling for database queries" |
| Rate limiting | "Implement rate limiting for AI API calls" |
| Structured output | "Extract structured data from user messages" |

## Configuration

```javascript
// bootspring.config.js
module.exports = {
  agents: {
    customInstructions: {
      'ai-integration-expert': `
        - Use Anthropic Claude as primary LLM
        - Implement streaming for better UX
        - Include rate limiting
        - Handle errors gracefully
        - Track token usage for cost management
      `,
    },
  },
  ai: {
    provider: 'anthropic',
    model: 'claude-sonnet-4-20250514',
    streaming: true,
  },
};
```