Conversational agents are increasingly deployed in knowledge-intensive settings, where correct behavior depends on retrieving and applying domain-specific knowledge from large, proprietary, and unstructured corpora during live interactions with users. Yet most existing benchmarks evaluate retrieval and tool use in isolation, leaving a gap in realistic, fully agentic evaluation over unstructured data in long-horizon interactions. We introduce $τ$-Knowledge, an extension of $τ$-Bench for evaluating agents in environments where success depends on coordinating external, natural-language knowledge with tool outputs to produce verifiable, policy-compliant state changes. Our new domain, $τ$-Banking, models realistic fintech customer-support workflows in which agents must navigate roughly 700 interconnected knowledge documents while executing tool-mediated account updates. Across both embedding-based retrieval and terminal-based search, even frontier models with high reasoning budgets achieve only $\sim$25.5% pass^1, with reliability degrading sharply over repeated trials. Agents struggle both to retrieve the correct documents from densely interlinked knowledge bases and to reason accurately over complex internal policies. Overall, $τ$-Knowledge provides a realistic testbed for developing agents that integrate unstructured knowledge in human-facing deployments.