AI intent alignment, ensuring that AI produces outcomes intended by its users, is a critical challenge in human-AI interaction. The emergence of generative AI, including LLMs, has intensified the significance of this problem, as interactions increasingly involve users specifying desired results for AI systems. To support better AI intent alignment, we explore human strategies for intent specification in human-human communication. By studying and comparing human-human and human-LLM communication, we identify key strategies that can be applied to the design of AI systems that more effectively understand and align with user intent. This study advances toward human-centered AI by bringing human communication strategies into the design of AI systems.