OpenAI Integration Pattern
Integrate OpenAI GPT models into Next.js applications for chat completions, streaming responses, function calling, and embeddings.
What's Included
- OpenAI client setup with environment-based API key configuration
- Basic and system-prompted chat completions
- Streaming responses with Vercel AI SDK integration
- Function calling for tool use and external system interaction
- Text embedding generation for semantic search
- Multi-turn conversation management
- Error handling with rate limit and authentication awareness
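The client setup and chat-completion items above can be sketched as a server-side call. This is a dependency-free sketch that hits the REST endpoint with `fetch`; the pattern's actual code presumably wraps the official `openai` npm client, but the request shape is the same. The model name, token cap, and the `trimHistory` helper (for the multi-turn conversation management item) are illustrative choices, not something this page pins down.

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Multi-turn management: keep the system prompt plus the last `maxTurns`
// user/assistant exchanges so the context window (and cost) stays bounded.
export function trimHistory(messages: ChatMessage[], maxTurns: number): ChatMessage[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  return [...system, ...rest.slice(-maxTurns * 2)];
}

// Server-only: the API key is read from the environment and never
// reaches client-side code.
export async function chatCompletion(messages: ChatMessage[]): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // illustrative default; pick per use case
      messages: trimHistory(messages, 10),
      max_tokens: 512, // cap cost and response length
    }),
  });
  if (!res.ok) throw new Error(`OpenAI error ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

In a Next.js app this would live in a route handler or server action, never in a component that ships to the browser.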
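The streaming item above relies on the Vercel AI SDK, which handles the wire format for you. As a dependency-free illustration of what that integration does under the hood, the sketch below decodes the `stream: true` Server-Sent Events chunks itself; `extractDeltas` and `streamChat` are hypothetical names, and real code would also buffer chunks that split an event mid-JSON.

```typescript
// With `stream: true`, the chat completions endpoint emits SSE lines of
// the form `data: {json}`, terminated by `data: [DONE]`. This pure
// helper pulls the text deltas out of one decoded chunk.
export function extractDeltas(chunk: string): string[] {
  const deltas: string[] = [];
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data: ") || trimmed === "data: [DONE]") continue;
    const payload = JSON.parse(trimmed.slice("data: ".length));
    const text = payload.choices?.[0]?.delta?.content;
    if (typeof text === "string") deltas.push(text);
  }
  return deltas;
}

// Server route sketch: forward deltas to the browser as they arrive,
// so users see text immediately instead of waiting for the full reply.
export async function streamChat(prompt: string): Promise<ReadableStream<string>> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      stream: true,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const decoder = new TextDecoder();
  const reader = res.body!.getReader();
  return new ReadableStream({
    async pull(controller) {
      const { done, value } = await reader.read();
      if (done) return controller.close();
      for (const d of extractDeltas(decoder.decode(value, { stream: true }))) {
        controller.enqueue(d);
      }
    },
  });
}
```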
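Function calling follows a declare/dispatch/reply loop: the request advertises tools as JSON Schema, the model may answer with a tool call instead of text, the server runs the matching local code, and the result goes back in a `role: "tool"` message. A minimal sketch, where `get_weather` and its stubbed result are hypothetical:

```typescript
type ToolCall = { id: string; function: { name: string; arguments: string } };

// Tool declarations sent in the request body's `tools` field.
export const tools = [
  {
    type: "function",
    function: {
      name: "get_weather",
      description: "Current weather for a city",
      parameters: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"],
      },
    },
  },
];

// Pure dispatcher: map a tool call chosen by the model onto local code.
// The model sends arguments as a JSON string, so parse before use.
export function dispatchTool(call: ToolCall): string {
  const args = JSON.parse(call.function.arguments);
  switch (call.function.name) {
    case "get_weather":
      return JSON.stringify({ city: args.city, tempC: 21 }); // stubbed result
    default:
      throw new Error(`unknown tool ${call.function.name}`);
  }
}
```

The dispatcher's return value would be appended to the conversation as `{ role: "tool", tool_call_id: call.id, content: result }` before re-invoking the model.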
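For the embeddings item, semantic search reduces to fetching a vector per text and ranking candidates by cosine similarity. Another REST-level sketch; the embedding model name is a common default rather than one this pattern prescribes.

```typescript
// Fetch the embedding vector for one piece of text.
export async function embed(text: string): Promise<number[]> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "text-embedding-3-small", input: text }),
  });
  if (!res.ok) throw new Error(`OpenAI error ${res.status}`);
  const data = await res.json();
  return data.data[0].embedding;
}

// Cosine similarity: 1 for identical direction, 0 for orthogonal.
// Rank stored document vectors against a query vector with this.
export function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}
```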
Usage
Via CLI
Via AI Assistant
Ask your AI assistant:
- "Use the OpenAI pattern from Bootspring"
- "Apply the Bootspring OpenAI pattern to my project"
Key Considerations
- Never expose API keys in client-side code; make all API calls from server routes
- Handle rate limits with retry logic, and surface a user-friendly message when a 429 error persists
- Set max_tokens limits on all requests to control costs and response length
- Stream long responses to improve perceived latency in chat interfaces
- Cache embeddings and repeated query results to reduce API usage and costs
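The retry and caching advice above can be sketched as two small helpers: exponential backoff on HTTP 429, and an in-memory memoizer for repeated queries and embeddings. Assumed details: errors carry a numeric `status` field, and the helper names (`backoffMs`, `withRetries`, `cached`) are illustrative; production code would add jitter, honor the `Retry-After` header, and use a bounded or persistent cache.

```typescript
// Exponential backoff delay: 500ms, 1000ms, 2000ms, ...
export function backoffMs(attempt: number, baseMs = 500): number {
  return baseMs * 2 ** attempt;
}

// Retry only on 429 (rate limit); rethrow anything else immediately.
export async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseMs = 500
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const status = (err as { status?: number }).status;
      if (status !== 429 || attempt + 1 >= maxAttempts) throw err;
      await new Promise((r) => setTimeout(r, backoffMs(attempt, baseMs)));
    }
  }
}

// Tiny in-memory cache keyed by query string: identical requests
// (e.g. re-embedding the same text) hit the API only once.
const cache = new Map<string, unknown>();
export async function cached<T>(key: string, fn: () => Promise<T>): Promise<T> {
  if (cache.has(key)) return cache.get(key) as T;
  const value = await fn();
  cache.set(key, value);
  return value;
}
```

Usage would look like `cached(text, () => withRetries(() => embed(text)))`.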
Related Patterns
- Embeddings - Vector embeddings for search
- Streaming - Real-time response streaming
- Function Calling - Tool use patterns
- RAG - Retrieval-augmented generation