Question / Claim
Function call leakage happens because AI models hallucinate tool usage; it may be fixable via better prompts or MCP metadata.
Key Assumptions
- AI models hallucinate tool calls when tool boundaries are not clearly enforced. (high confidence)
- Better prompts or metadata in MCP can reduce or prevent function call leakage. (medium confidence)
Evidence & Observations
- Personal experience: I have observed AI models claiming to have called tools when no call was actually made. (personal)
Open Uncertainties
- Whether prompt engineering alone is sufficient or if architectural changes are required.
- What specific MCP metadata patterns are most effective at preventing leakage.
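To make the second uncertainty concrete, here is a minimal sketch of one candidate metadata pattern: an MCP-style tool declaration (the field names `name`, `description`, and `inputSchema` follow the MCP tool declaration shape) with a strict input schema and an explicit anti-leakage instruction in the description. The instruction wording is my assumption about what might help, not a verified fix.

```python
# Hypothetical MCP-style tool declaration. The anti-leakage sentence in the
# description is an assumption under test, not a documented MCP feature.
search_tool = {
    "name": "search",
    "description": (
        "Search the document index. "
        "Only report results after a real tool call returns; "
        "never describe a call that was not actually executed."
    ),
    # Strict JSON Schema: no extra fields, query is required.
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
        "additionalProperties": False,
    },
}
```

The `additionalProperties: False` constraint is the structural half of the idea: even if the description is ignored, malformed or invented arguments fail schema validation instead of silently passing through.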
Current Position
I believe function call leakage is mainly caused by insufficient prompt or metadata constraints around tool calling, and that MCP-level fixes might reduce hallucinated calls.
This is work-in-progress thinking, not a final conclusion.
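Whatever the cause turns out to be, the leakage itself can be made detectable. The sketch below compares tool calls the model *claims* in its reply text against a log of calls that actually executed; every name here (`claimed_tool_calls`, `find_leaked_calls`, the regex for claim phrasing) is illustrative, not part of any MCP API.

```python
# Hypothetical leakage detector: a claimed call with no matching entry in
# the real execution log counts as "function call leakage".
import re

def claimed_tool_calls(model_text: str) -> set[str]:
    """Extract tool names the model says it invoked, e.g. 'I called search(...)'.
    The phrasing pattern is a simplifying assumption for this sketch."""
    return set(re.findall(r"called\s+`?(\w+)`?\(", model_text))

def find_leaked_calls(model_text: str, executed_log: list[str]) -> set[str]:
    """Claimed calls that never appear in the actual execution log."""
    return claimed_tool_calls(model_text) - set(executed_log)

reply = 'I called `search("MCP spec")` and then called summarize() on the results.'
log = ["search"]  # only search actually ran
print(find_leaked_calls(reply, log))  # -> {'summarize'}
```

A check like this would also help answer the first open uncertainty: run it with and without the prompt/metadata constraints and compare leakage rates, rather than judging by impression.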