@tour-kit/ai
CAG Guide
Context-Augmented Generation guide: inject tour documentation directly into AI prompts for fast, deterministic responses
CAG is the simplest way to make your AI assistant tour-aware. It works by stuffing relevant tour context directly into the system prompt sent to the LLM.
When to Use CAG
- Your tour has fewer than ~20 steps
- You want a quick setup with no infrastructure
- Your context fits within the model's context window
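A quick way to sanity-check the last criterion is to estimate the token footprint of your tour documents before committing to CAG. A minimal sketch, assuming a ~4-characters-per-token heuristic and a 128k-token window (both illustrative values, not part of @tour-kit/ai):

```typescript
// Rough heuristic for deciding whether CAG fits your tour.
// The 4-chars-per-token ratio and 128k window are assumptions.
type TourDoc = { id: string; content: string }

function estimateTokens(text: string): number {
  // ~4 characters per token is a common approximation for English text
  return Math.ceil(text.length / 4)
}

function fitsContextWindow(docs: TourDoc[], windowTokens = 128_000): boolean {
  const total = docs.reduce((sum, d) => sum + estimateTokens(d.content), 0)
  // Leave at least half the window for the conversation itself
  return total < windowTokens * 0.5
}
```

If this check fails for your documentation set, that is a signal to reach for RAG instead.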
Setup
Client Configuration
Enable tourContext in your provider config so tour state is sent with each request:
```tsx
import { AiChatProvider } from '@tour-kit/ai'

function App() {
  return (
    <AiChatProvider
      config={{
        endpoint: '/api/chat',
        tourContext: true,
      }}
    >
      <YourApp />
    </AiChatProvider>
  )
}
```
Server Configuration
Configure createChatRouteHandler with the context-stuffing strategy; it builds a context-rich system prompt from your documents and instructions:
```ts
import { createChatRouteHandler } from '@tour-kit/ai/server'
import { openai } from '@ai-sdk/openai'

const { POST } = createChatRouteHandler({
  model: openai('gpt-4o-mini'),
  context: {
    strategy: 'context-stuffing',
    documents: [
      { id: 'guide', content: 'Step 1: Click the button...' },
    ],
  },
  instructions: {
    productName: 'My App',
    tone: 'friendly',
    boundaries: ['Only answer questions about onboarding'],
  },
})

export { POST }
```
How It Works
1. The client collects tour context (steps, progress, current step)
2. The context is serialized and sent with each chat request
3. The server injects this context into the system prompt
4. The LLM receives the full tour context and can answer questions about it
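The injection step can be sketched as a function that folds the serialized tour context into the base system prompt. The shapes and function name below are illustrative, not the @tour-kit/ai internals:

```typescript
// Sketch of context stuffing: merge serialized tour state into the
// system prompt. TourContext here is a hypothetical shape, not the
// library's actual type.
type TourContext = {
  currentStep: number
  totalSteps: number
  steps: { title: string }[]
}

function buildSystemPrompt(base: string, ctx: TourContext): string {
  const stepList = ctx.steps
    .map((s, i) => `${i + 1}. ${s.title}`)
    .join('\n')
  return [
    base,
    `The user is on step ${ctx.currentStep} of ${ctx.totalSteps}.`,
    'Tour steps:',
    stepList,
  ].join('\n\n')
}
```

Because the prompt is assembled deterministically from the current tour state, identical states produce identical prompts, which is what makes CAG responses fast and predictable.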
Limitations
- Context size grows linearly with tour complexity
- Not suitable for large documentation sets (use RAG instead)
- Each request sends the full context, increasing token usage
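To make the last limitation concrete, per-request overhead scales linearly with tour size. The tokens-per-step figure below is an illustrative assumption, not a measured value:

```typescript
// Linear cost of context stuffing: every request re-sends the full
// context. 150 tokens per step is an assumed average for illustration.
function contextTokensPerRequest(stepCount: number, tokensPerStep = 150): number {
  return stepCount * tokensPerStep
}
```

Under that assumption, a 10-step tour adds roughly 1,500 tokens to every request, while a 100-step tour adds roughly 15,000, which is when RAG starts to pay off.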