Claude Code / Desktop Quick Start
Get ToolPilot running with Claude in under 2 minutes. Your agent will be able to search 491+ developer tools, compare alternatives, and build stacks.
Prerequisites
- Claude Code (CLI) or Claude Desktop installed
- Node.js 18+ (for npx)
Step 1: Add the MCP Server Config
Add the ToolPilot MCP server to your Claude configuration file:
claude_desktop_config.json
{
  "mcpServers": {
    "toolpilot": {
      "command": "npx",
      "args": ["-y", "@anthropic/toolpilot-mcp"],
      "env": {}
    }
  }
}
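If your claude_desktop_config.json already lists other MCP servers, add the toolpilot entry alongside them rather than replacing the file. A minimal Python sketch of that merge (the "existing" server entry is a placeholder, not part of ToolPilot):

```python
import json

# Start from a config that already has another server registered
# ("existing" is a stand-in for whatever you have configured).
config = {
    "mcpServers": {
        "existing": {"command": "node", "args": ["server.js"]}
    }
}

# Add ToolPilot without clobbering the servers already present.
config.setdefault("mcpServers", {})["toolpilot"] = {
    "command": "npx",
    "args": ["-y", "@anthropic/toolpilot-mcp"],
    "env": {},
}

print(json.dumps(config, indent=2))
```

The key point is to merge into the existing `mcpServers` object; overwriting the whole file silently disables any other servers you had set up.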
Claude Code vs Claude Desktop
Claude Desktop: Edit the config at ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows).
Claude Code: Run claude mcp add toolpilot npx -y @anthropic/toolpilot-mcp from your terminal; it handles the config for you.
Step 2: Try Your First Search
Restart Claude, then ask a natural-language question about developer tools. ToolPilot handles the rest via the MCP protocol:
Example conversation
You: "I need a fast, production-ready vector database for storing embeddings.
Must support filtering and be self-hostable."
Claude (using ToolPilot):
→ Calls search_tools with your query
→ ToolPilot asks a clarification question: "What scale are you targeting?"
→ Claude answers via search_tools_respond
→ Returns ranked results:
  1. Qdrant – Health: Active (self-host, filtering, Rust-based)
  2. Milvus – Health: Active (distributed, GPU-accelerated)
  3. Weaviate – Health: Active (GraphQL API, modules)
  4. Chroma – Health: Active (simple, Python-native)
Be specific
The more context you give (language, scale, constraints), the fewer clarification rounds ToolPilot needs and the better the results.
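Under the hood, each search is an MCP tools/call request sent over JSON-RPC. A sketch of roughly what Claude sends (the envelope follows the MCP specification; the keys inside "arguments" are illustrative assumptions, not ToolPilot's documented schema):

```python
import json

# Sketch of the JSON-RPC 2.0 message behind an MCP tool call.
# "tools/call" and the params shape come from the MCP spec; the
# "query" argument name is an assumption for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_tools",
        "arguments": {
            "query": "fast, self-hostable vector database with filtering",
        },
    },
}

print(json.dumps(request, indent=2))
```

You never write this payload yourself; Claude constructs it from your natural-language question, which is why the extra context you provide flows straight into the search.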
Step 3: Explore Results
Each tool recommendation includes:
- Health tier: Active, Stable, Slowing, or At Risk, based on commit activity, issues, and releases
- Graph context: related tools, alternatives, and companion libraries
- Key metadata: stars, license, language, last release date, and category tags
- Match rationale: why the tool was selected for your specific query
Ask Claude follow-up questions like "Compare Qdrant and Milvus" or "Build me a full RAG stack" to dig deeper.
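If you consume results programmatically, a single recommendation can be modeled roughly like this (a sketch only; the field names mirror the bullets above and are assumptions, not ToolPilot's actual response schema):

```python
from dataclasses import dataclass, field

# Illustrative record for one recommendation. Field names follow the
# four bullets above; they are assumptions, not a documented schema.
@dataclass
class ToolResult:
    name: str
    health_tier: str          # "Active", "Stable", "Slowing", or "At Risk"
    match_rationale: str
    license: str = ""
    language: str = ""
    alternatives: list[str] = field(default_factory=list)

# Values drawn from the example search in Step 2.
qdrant = ToolResult(
    name="Qdrant",
    health_tier="Active",
    match_rationale="Self-hostable, supports filtering, Rust-based",
    language="Rust",
    alternatives=["Milvus", "Weaviate", "Chroma"],
)

print(qdrant.health_tier)
```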
Step 4: Report Outcomes
After you've tried a tool, let Claude know how it went. This feeds back into the graph and improves future recommendations:
Feedback example
You: "I went with Qdrant and it's working great for my use case."
Claude (using ToolPilot):
→ Calls report_outcome with tool="qdrant", outcome="adopted"
→ The graph learns from your feedback
Why report outcomes?
Feedback adjusts edge weights in the tool graph. Over time, ToolPilot learns which tools work well together and which ones don't, making every search smarter.
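Like search, report_outcome is an ordinary MCP tool call. A sketch of the request, reusing the tool="qdrant", outcome="adopted" values from the example above (the JSON-RPC envelope follows the MCP spec; treat the exact schema as an assumption):

```python
import json

# Sketch of the tools/call request behind report_outcome. The
# "tool" and "outcome" argument names come from the feedback
# example above; the envelope is standard MCP JSON-RPC.
feedback = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "report_outcome",
        "arguments": {"tool": "qdrant", "outcome": "adopted"},
    },
}

print(json.dumps(feedback, indent=2))
```

In practice you just tell Claude how the tool worked out in plain language; it maps your sentence onto this call for you.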