Agentic AI assistants that autonomously perform multi-step tasks raise open questions for user experience: how should such systems communicate progress and reasoning during extended operations, especially in attention-critical contexts such as driving? We investigate feedback timing and verbosity in agentic LLM-based in-car assistants through a controlled, mixed-methods study (N=45) comparing feedback that communicates planned steps and intermediate results against silent operation with a final-only response. Using a dual-task paradigm with an in-car voice assistant, we found that intermediate feedback significantly improved perceived speed, trust, and user experience while reducing task load, effects that held across varying task complexities and interaction contexts. Interviews further revealed user preferences for an adaptive approach: high initial transparency to establish trust, followed by progressively reduced verbosity as the system proves reliable, with adjustments based on task stakes and situational context. We translate our empirical findings into design implications for feedback timing and verbosity in agentic assistants, balancing transparency and efficiency.