Rich Context Guide
Clarification rounds add latency. If your agent already knows the project’s language, framework, and constraints, skip them entirely by providing context filters upfront.
What you’ll learn
- Why clarification happens and when to skip it
- All available context filters and what they do
- How to go from a vague query to instant results
- Best practices for agents that auto-detect project context
The problem: unnecessary round-trips
ToolPilot’s Guided Discovery system asks follow-up questions when a query is ambiguous. That’s great for interactive use — but in automated pipelines or when an AI agent already has full project context, those extra round-trips add latency without adding value.
A bare query like this will almost always trigger clarification:
```json
{
  "query": "database"
}
```
ToolPilot needs to know: SQL or NoSQL? What language? Self-hosted or cloud? What’s the scale? Without answers, it can’t rank meaningfully. The solution is to provide that context upfront.
The solution: context filters
Add a context object to your search_tools call. This gives ToolPilot the signal it needs to skip clarification and go straight to results.
```json
{
  "query": "database",
  "context": {
    "language": "Python",
    "category": "orm",
    "deployment": "self-hosted"
  }
}
```
With those three filters, ToolPilot knows exactly what you mean: a self-hosted ORM for Python. No follow-up needed — you get results on the first call.
Side by side: without vs. with context
- Without context: the bare `"database"` query triggers a clarification round (SQL or NoSQL? What language? Self-hosted or cloud?) before any results come back.
- With context: the same query returns ranked results on the first call — no follow-up.
Available context filters
You can provide any combination of these filters. Each one narrows the search and reduces the likelihood of clarification.
- language — the implementation language you need support for (e.g. "Python", "TypeScript")
- category — the kind of tool you’re looking for (e.g. "orm")
- license — your licensing requirements for the tool
- deployment — where the tool runs (e.g. "self-hosted", "cloud")
- framework — the framework the tool must integrate with
All filters are optional. Providing even one significantly reduces clarification frequency.
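As a sketch of how a caller might assemble these filters, the helper below drops empty values so the context object only carries real signal. Both the `SearchContext` shape and the `buildRequest` helper are assumptions for illustration, not part of ToolPilot’s API:

```typescript
// Hypothetical filter shape based on the filters this guide names; the
// exact schema ToolPilot accepts may differ.
interface SearchContext {
  language?: string;
  category?: string;
  license?: string;
  deployment?: string;
  framework?: string;
}

// Sketch of a request builder: drop undefined/empty filters, and omit the
// context object entirely when nothing remains.
function buildRequest(query: string, context: SearchContext = {}) {
  const filters = Object.fromEntries(
    Object.entries(context).filter(([, value]) => Boolean(value))
  );
  return Object.keys(filters).length > 0 ? { query, context: filters } : { query };
}
```

For example, `buildRequest('database', { language: 'Python', category: 'orm' })` produces `{ query: 'database', context: { language: 'Python', category: 'orm' } }`, ready to pass to search_tools.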
When to use which approach
✅ Use rich context when…
- Your agent already analyzed the codebase
- You know the language, framework, and deployment target
- Speed matters — CI/CD pipelines, automated workflows
- The query is specific enough that follow-ups waste time
💬 Let clarification happen when…
- The user is exploring — they’re not sure what they need
- The query is intentionally broad (“what testing tools exist?”)
- You want ToolPilot to surface options the user hadn’t considered
- Interactive latency is acceptable
Smart agent pattern: auto-detect context
The most effective agents don’t wait for the user to provide context — they extract it from the codebase automatically. Here’s the pattern:
```typescript
import { readFile } from 'fs/promises';

async function buildSearchContext(projectRoot: string) {
  const context: Record<string, string> = {};

  // Detect language from package.json or pyproject.toml
  try {
    const pkg = JSON.parse(await readFile(`${projectRoot}/package.json`, 'utf-8'));
    context.language = pkg.devDependencies?.typescript ? 'TypeScript' : 'JavaScript';
  } catch {
    try {
      await readFile(`${projectRoot}/pyproject.toml`, 'utf-8');
      context.language = 'Python';
    } catch {
      // Language unknown — let clarification handle it
    }
  }

  // Detect deployment from Docker or cloud configs
  try {
    await readFile(`${projectRoot}/Dockerfile`, 'utf-8');
    context.deployment = 'self-hosted';
  } catch {
    // No Dockerfile — might be cloud or embedded
  }

  return context;
}
```
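The same detection pattern becomes unit-testable if it inspects an in-memory file map instead of reading from disk. This is a sketch using the same marker files (package.json, pyproject.toml, Dockerfile); the `detectContext` function is an illustration, not part of ToolPilot:

```typescript
// Testable variant of the detection pattern: inspect a map of file name to
// contents rather than the filesystem. Marker files mirror the code above.
function detectContext(files: Map<string, string>): Record<string, string> {
  const context: Record<string, string> = {};

  const pkg = files.get('package.json');
  if (pkg !== undefined) {
    try {
      const parsed = JSON.parse(pkg);
      context.language = parsed.devDependencies?.typescript ? 'TypeScript' : 'JavaScript';
    } catch {
      // Malformed package.json: leave language undetected
    }
  } else if (files.has('pyproject.toml')) {
    context.language = 'Python';
  }

  if (files.has('Dockerfile')) {
    context.deployment = 'self-hosted';
  }

  return context;
}
```

Keeping the heuristics pure like this lets you verify them without touching disk; a filesystem wrapper such as the `buildSearchContext` above can then delegate to the pure function.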
Smart agents analyze the project first
Before searching, inspect files like package.json, requirements.txt, or go.mod to automatically provide language and framework context. This turns every search into a zero-clarification lookup.
Recap
Rich context is a power-user feature. By adding a context object with language, category, license, or deployment filters to your search_tools call, you skip clarification entirely and get instant results. For the best experience, build agents that auto-detect project context from the codebase.