Large Language Models (LLMs) have demonstrated impressive capabilities in generating coherent text but often struggle with grounding language and strategic dialogue. To address this gap, we focus on journalistic interviews, a domain rich in grounding communication and abundant in data. We curate a dataset of 40,000 two-person informational interviews from NPR and CNN, and reveal that LLMs are significantly less likely than human interviewers to use acknowledgements and to pivot to higher-level questions. Realizing that a fundamental deficit exists in multi-turn planning and strategic thinking, we develop a realistic simulated environment, incorporating source personas and persuasive elements, in order to facilitate the development of agents with longer-horizon rewards. Our experiments show that while source LLMs mimic human behavior in information sharing, interviewer LLMs struggle with recognizing when questions are answered and engaging persuasively, leading to suboptimal information extraction across model size and capability. These findings underscore the need for enhancing LLMs' strategic dialogue capabilities.