# feedback

Agent-only channel for flagging problems with ToolCairn’s own MCP tools: wrong results, broken responses, or missing capabilities. Distinct from `report_outcome`, which closes the loop on user-suggested libraries.
## When to use
ONLY when a ToolCairn response was wrong, broken, low-quality, or missed something obvious — never for positive feedback or routine confirmation. Every other tool’s response includes a feedback_channel hint reminding you of this; the schema enforces it (severity is negative-only by enum, and the server deduplicates identical reports within 24h).
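The 24-hour deduplication mentioned above can be sketched roughly as follows. This is an illustrative guess at the mechanism, not ToolCairn's actual implementation; the fingerprinting scheme and in-memory store are assumptions:

```python
import hashlib
import time
from typing import Optional

DEDUP_WINDOW_SECONDS = 24 * 60 * 60  # 24 hours

_seen: dict = {}  # report fingerprint -> first-seen timestamp

def record_feedback(tool_name: str, severity: str, message: str,
                    now: Optional[float] = None) -> dict:
    """Record a report, coalescing identical ones seen within the window."""
    now = time.time() if now is None else now
    # Identical (tool_name, severity, message) triples hash to the same fingerprint.
    fingerprint = hashlib.sha256(
        f"{tool_name}\x00{severity}\x00{message}".encode()
    ).hexdigest()
    first_seen = _seen.get(fingerprint)
    if first_seen is not None and now - first_seen < DEDUP_WINDOW_SECONDS:
        # Duplicate within the window: acknowledge but flag the coalescing.
        return {"ok": True, "data": {"recorded": True, "deduped": True}}
    _seen[fingerprint] = now
    return {"ok": True, "data": {"recorded": True}}
```

Note that in this sketch a duplicate does not refresh the window; only the first report's timestamp counts.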
## Input schema
- `tool_name` (string, required): which ToolCairn tool the feedback is about. One of `"classify_prompt"`, `"search_tools"`, `"search_tools_respond"`, `"get_stack"`, `"check_compatibility"`, `"compare_tools"`, `"refine_requirement"`, `"check_issue"`, `"verify_suggestion"`, `"report_outcome"`, `"suggest_graph_update"`, `"toolcairn_init"`, `"read_project_config"`, `"update_project_config"`, `"toolcairn_auth"`. Forces the agent to localize the issue. Excludes `feedback` itself; no recursive feedback.
- `severity` (enum, required): one of `"broken"`, `"wrong_result"`, `"low_quality"`, `"missing_capability"`, `"confusing"`. Negative-only: there is no syntactic way to file a positive report, and that is intentional. `broken` = the tool errored or returned malformed output. `wrong_result` = the answer was clearly incorrect. `low_quality` = the answer was relevant but unhelpful. `missing_capability` = ToolCairn lacks a feature you needed. `confusing` = the response was hard to interpret.
- `message` (string, 20–2000 chars, required): what went wrong, specific enough that an admin can act on it. The 20-character minimum blocks "fine" / "ok" / "bad" loops.
- `query_id` (string, UUID, optional): the `query_id` of the offending call. Every recommendation tool surfaces a `query_id` in its response; pass it through so an admin can replay the failing input/output.
- `expected` (string, ≤1000 chars, optional): what the agent expected the response to contain.
- `actual` (string, ≤1000 chars, optional): what the response actually contained, paraphrased.
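As a rough illustration of the constraints the schema enforces, a client-side pre-check might look like the sketch below. The helper name and error strings are hypothetical; the server's validation remains the source of truth:

```python
import uuid

SEVERITIES = {"broken", "wrong_result", "low_quality",
              "missing_capability", "confusing"}
TOOL_NAMES = {
    "classify_prompt", "search_tools", "search_tools_respond", "get_stack",
    "check_compatibility", "compare_tools", "refine_requirement", "check_issue",
    "verify_suggestion", "report_outcome", "suggest_graph_update",
    "toolcairn_init", "read_project_config", "update_project_config",
    "toolcairn_auth",
}

def validate_feedback(payload: dict) -> list:
    """Return a list of validation errors (empty means the payload is sendable)."""
    errors = []
    if payload.get("tool_name") not in TOOL_NAMES:
        errors.append("tool_name must be a recognized ToolCairn tool (never 'feedback')")
    if payload.get("severity") not in SEVERITIES:
        errors.append("severity must be one of the negative-only enum values")
    message = payload.get("message", "")
    if not (20 <= len(message) <= 2000):
        errors.append("message must be 20-2000 characters")
    if "query_id" in payload:
        try:
            uuid.UUID(payload["query_id"])
        except (ValueError, TypeError):
            errors.append("query_id must be a UUID string")
    for field in ("expected", "actual"):
        if field in payload and len(payload[field]) > 1000:
            errors.append(f"{field} must be at most 1000 characters")
    return errors
```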
## Examples

### Wrong result

```json
{
  "tool_name": "search_tools",
  "severity": "wrong_result",
  "message": "Query asked for Go HTTP clients; results were entirely Python and Node libraries with no Go alternatives.",
  "query_id": "5b3f8a90-2c9d-4f0e-b1a4-9e6e6d7c5a30",
  "expected": "Top 5 results to be Go HTTP libraries (resty, fasthttp, etc.)",
  "actual": "Top 5 were requests, axios, fetch, urllib3, httpx"
}
```

### Missing capability
```json
{
  "tool_name": "get_stack",
  "severity": "missing_capability",
  "message": "No way to constrain a stack layer to a specific license family. Need a 'license_allowlist' option for compliance use cases."
}
```

### Broken response
```json
{
  "tool_name": "check_compatibility",
  "severity": "broken",
  "message": "Returned 500 internal server error for tool_a='vite' tool_b='vue@3.4'. Same pair worked yesterday."
}
```

## Response format
Fire-and-forget: the response is just an acknowledgement, returned with status code `202 Accepted`.
### Standard ack

```json
{
  "ok": true,
  "data": {
    "recorded": true
  }
}
```

### Coalesced (duplicate within 24h)
```json
{
  "ok": true,
  "data": {
    "recorded": true,
    "deduped": true
  }
}
```
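A minimal sketch of how a client might classify the two ack shapes above (the helper name is hypothetical):

```python
import json

def interpret_ack(raw: str) -> str:
    """Classify a feedback ack as 'recorded', 'deduped', or 'error'."""
    ack = json.loads(raw)
    if not ack.get("ok"):
        return "error"
    data = ack.get("data", {})
    if data.get("deduped"):
        return "deduped"   # coalesced into a duplicate from the past 24h
    if data.get("recorded"):
        return "recorded"  # standard fire-and-forget success
    return "error"
```

Since the call is fire-and-forget, agents typically need nothing beyond this: there is no payload to act on.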
## Status codes

- `400 Bad Request`: validation failed. Most common causes: `severity` not in the negative-only enum, `message` under 20 characters, `tool_name` not in the recognized list.
- `202 Accepted`: standard success. The report was recorded (or silently coalesced into a duplicate from the past 24 hours).

## Related tools
- `report_outcome`: use this for outcomes of libraries ToolCairn recommended; `feedback` is for ToolCairn itself.
- `suggest_graph_update`: for proposing missing tools / edges to the graph.
- Feedback Loop: how user-side outcomes shape the graph (a different system from this tool).