Large Language Models (LLMs) have achieved considerable success in In-Context Learning (ICL) based summarization. However, saliency depends on a user's specific preference history, so such LLMs need reliable In-Context Personalization Learning (ICPL) capabilities. For an arbitrary LLM to exhibit ICPL, it must be able to discern contrasts between user profiles. A recent study proposed EGISES, the first measure of degree-of-personalization, which quantifies a model's responsiveness to differences between user profiles. However, EGISES cannot test whether a model utilizes all three types of cues provided in ICPL prompts: (i) example summaries, (ii) users' reading histories, and (iii) contrast between user profiles. To address this, we propose the iCOPERNICUS framework, a novel In-COntext PERsonalization learNIng sCrUtiny of Summarization capability in LLMs that uses EGISES as a comparative measure. As a case study, we evaluate 17 state-of-the-art LLMs, selected on the basis of their reported ICL performance, and observe that the ICPL of 15 models degrades (min: 1.6%; max: 3.6%) when they are probed with richer prompts, showing a lack of true ICPL.
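To make the probing protocol concrete, the sketch below illustrates the comparative idea in Python: score a model under progressively richer ICPL prompts (examples only, plus reading history, plus profile contrast) and check whether its responsiveness to user-profile differences degrades. This is a minimal, hypothetical sketch: the `model` callable, the user-dict fields (`example`, `history`, `contrast`), and the prompt templates are assumptions, and the Jaccard-based divergence is a crude stand-in, not the actual EGISES formula defined in the paper.

```python
"""Hypothetical sketch of an iCOPERNICUS-style probing loop.

Assumptions (not from the paper): `model(document, prompt)` wraps an
LLM call; each user dict has 'example', 'history', and 'contrast'
fields; token-overlap divergence stands in for the real EGISES measure.
"""
from itertools import combinations

def jaccard_distance(a: str, b: str) -> float:
    """Crude proxy for summary divergence (NOT the EGISES formula)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return 1.0 - len(ta & tb) / max(len(ta | tb), 1)

def responsiveness(summaries: list[str]) -> float:
    """Mean pairwise divergence across per-user summaries; a model that
    ignores user profiles collapses toward 0."""
    pairs = list(combinations(summaries, 2))
    return sum(jaccard_distance(a, b) for a, b in pairs) / max(len(pairs), 1)

def probe(model, document: str, users: list[dict]) -> dict[str, float]:
    """Score the model under the three cue levels from the paper:
    P1 = example summaries; P2 = P1 + reading history;
    P3 = P2 + contrast between user profiles."""
    templates = {
        "P1_examples": lambda u: f"Example summary: {u['example']}",
        "P2_plus_history": lambda u: (f"Example summary: {u['example']}\n"
                                      f"Reading history: {u['history']}"),
        "P3_plus_contrast": lambda u: (f"Example summary: {u['example']}\n"
                                       f"Reading history: {u['history']}\n"
                                       f"Other users differ in: {u['contrast']}"),
    }
    scores = {}
    for level, build_prompt in templates.items():
        summaries = [model(document, build_prompt(u)) for u in users]
        scores[level] = responsiveness(summaries)
    return scores
```

Under this sketch, a model with true ICPL should show responsiveness that holds or rises from P1 to P3 as richer cues are supplied; the case study in the paper instead finds that 15 of the 17 probed LLMs degrade by 1.6% to 3.6%.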