Agentic AI assistants that autonomously perform multi-step tasks raise open questions for user experience: how should such systems communicate progress and reasoning during extended operations, especially in attention-critical contexts such as driving? We investigate feedback timing and verbosity in agentic LLM-based in-car assistants through a controlled, mixed-methods study (N=45), comparing feedback that announces planned steps and intermediate results against silent operation with only a final response. Using a dual-task paradigm with an in-car voice assistant, we found that intermediate feedback significantly improved perceived speed, trust, and user experience while reducing task load; these effects held across varying task complexities and interaction contexts. Interviews further revealed user preferences for an adaptive approach: high initial transparency to establish trust, followed by progressively reduced verbosity as the system proves reliable, with adjustments based on task stakes and situational context. We translate our empirical findings into design implications for feedback timing and verbosity in agentic assistants, balancing transparency and efficiency.