High-quality feedback is essential for effective human-AI interaction. It bridges knowledge gaps, corrects digressions, and shapes system behavior, both during interaction and throughout model development. Yet despite its importance, human feedback to AI is often infrequent and of low quality. This gap motivates a critical examination of human feedback during interactions with AI. To understand and overcome the challenges preventing users from giving high-quality feedback, we conducted two studies examining feedback dynamics between humans and conversational agents (CAs). Our formative study, through the lens of Grice's maxims, identified four Feedback Barriers -- Common Ground, Verifiability, Communication, and Informativeness -- that prevent users from providing high-quality feedback. Building on these findings, we derive three design desiderata and show that systems incorporating scaffolds aligned with these desiderata enable users to provide higher-quality feedback. Finally, we detail a call to action for the broader AI community: advances in Large Language Model capabilities are needed to overcome Feedback Barriers.