Generative AI agents equate understanding with resolving explicit queries, an assumption that confines interaction to what users can articulate. This assumption breaks down when users are themselves unaware of what is missing, risky, or worth considering. In such conditions, proactivity is not merely an efficiency enhancement but an epistemic necessity. We refer to this condition as epistemic incompleteness: a state in which effective partnership depends on engaging with unknown unknowns. Existing approaches to proactivity remain narrowly anticipatory: they extrapolate from past behavior and presume that goals are already well defined, and thereby fail to support users meaningfully. However, surfacing possibilities beyond a user's current awareness is not inherently beneficial. Unconstrained proactive interventions can misdirect attention, overwhelm users, or introduce harm. Proactive agents therefore require behavioral grounding: principled constraints on when, how, and to what extent an agent should intervene. We advance the position that generative proactivity must be grounded both epistemically and behaviorally. Drawing on the philosophy of ignorance and research on proactive behavior, we argue that these theories offer critical guidance for designing agents that engage responsibly and foster meaningful partnerships.