Large language models (LLMs) can be used to generate natural language explanations (NLE) that are adapted to different users' situations. However, there has not yet been a quantitative evaluation of the extent of such adaptation. To bridge this gap, we collect a benchmarking dataset, Situation-Based Explanation (SBE). The dataset contains 100 explanandums, each paired with explanations targeted at three distinct audience types (e.g., educators, students, and professionals), enabling us to assess how well the explanations meet the specific informational needs and contexts of these diverse groups. For each "explanandum paired with an audience" situation, we include a human-written explanation, which allows us to compute scores that quantify how well LLMs adapt their explanations to the situation. On an array of pretrained language models of varying sizes, we examine three categories of prompting methods: rule-based prompting, meta-prompting, and in-context learning prompting. We find that 1) language models can generate prompts that result in explanations more precisely aligned with the target situations, 2) explicitly modeling an "assistant" persona with a prompt such as "You are a helpful assistant..." is not a necessary prompt technique for situated NLE tasks, and 3) in-context learning prompts can help LLMs learn the demonstration template but do not improve their inference performance. SBE and our analysis facilitate future research towards generating situated natural language explanations.
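To make the three prompting categories concrete, the sketch below shows one way such prompts could be assembled in Python. It is a minimal illustration under our own assumptions: the function names and template wording are hypothetical and are not the prompts used in the paper.

```python
# Hypothetical sketch of the three prompting categories named in the
# abstract; all template strings here are illustrative assumptions.

def rule_based_prompt(explanandum: str, audience: str) -> str:
    """Rule-based prompting: a fixed, hand-written template
    instantiated with the explanandum and the target audience."""
    return (
        f"Explain {explanandum} to a {audience}, adapting the "
        "explanation to this audience's background and needs."
    )

def meta_prompt(explanandum: str, audience: str) -> str:
    """Meta-prompting: ask the language model itself to write the
    prompt; finding (1) suggests model-generated prompts can yield
    explanations more precisely aligned with the target situation."""
    return (
        "Write a prompt that would make a language model explain "
        f"{explanandum} in a way suited to a {audience}."
    )

def in_context_prompt(explanandum: str, audience: str,
                      demos: list[tuple[str, str, str]]) -> str:
    """In-context learning prompting: prepend (explanandum, audience,
    explanation) demonstrations before the query, so the model can
    imitate the demonstration template."""
    demo_block = "\n\n".join(
        f"Explanandum: {e}\nAudience: {a}\nExplanation: {x}"
        for e, a, x in demos
    )
    return (
        f"{demo_block}\n\n"
        f"Explanandum: {explanandum}\nAudience: {audience}\nExplanation:"
    )
```

In an evaluation loop, each prompt would be sent to a pretrained LLM and the generated explanation scored against the human-written reference for that explanandum-audience pair; the scoring metrics themselves are described in the paper, not in this sketch.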