LLMs are increasingly employed as judges across a variety of tasks, including those involving everyday social interactions. Yet it remains unclear whether such LLM judges can reliably assess tasks that require social or conversational judgment. We investigate how an LLM's conviction changes when a task is reframed from a direct factual query to a Conversational Judgment Task. Our evaluation framework contrasts the model's performance on direct factual queries with its assessment of a speaker's correctness when the same information is presented within a minimal dialogue, effectively shifting the query from "Is this statement correct?" to "Is this speaker correct?". Furthermore, we apply pressure in the form of a simple rebuttal ("The previous answer is incorrect.") to both conditions. This perturbation allows us to measure how firmly the model maintains its position under conversational pressure. Our findings show that while some models, such as GPT-4o-mini, exhibit sycophantic tendencies under social framing, others, such as Llama-8B-Instruct, become overly critical. We observe an average performance change of 9.24% across all models, demonstrating that even minimal dialogue context can significantly alter model judgment and underscoring conversational framing as a key factor in LLM-based evaluation. The proposed framework offers a reproducible methodology for diagnosing model conviction and contributes to the development of more trustworthy dialogue systems.
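To make the two conditions concrete, the minimal sketch below shows how a single factual claim might be posed as a direct query versus a conversational judgment, and how the rebuttal turn is appended. The prompt templates and the `build_conversation` helper are illustrative assumptions, not the exact wording used in our experiments.

```python
# Minimal sketch of the two evaluation conditions.
# NOTE: the templates below are assumed wordings for illustration only;
# they are not the exact prompts used in the paper.

DIRECT_TEMPLATE = (
    "Is the following statement correct? Answer 'yes' or 'no'.\n"
    "Statement: {claim}"
)

DIALOGUE_TEMPLATE = (
    "Consider this short exchange:\n"
    "A: {claim}\n"
    "Is speaker A correct? Answer 'yes' or 'no'."
)

REBUTTAL = "The previous answer is incorrect."


def build_conversation(claim: str, conversational: bool, with_rebuttal: bool):
    """Build the message list for one evaluation item.

    claim          -- the factual statement under test
    conversational -- if True, frame the query as a judgment about a speaker
    with_rebuttal  -- if True, append the pressure turn (in the actual
                      protocol the model's first answer would appear here)
    """
    template = DIALOGUE_TEMPLATE if conversational else DIRECT_TEMPLATE
    messages = [{"role": "user", "content": template.format(claim=claim)}]
    if with_rebuttal:
        # Placeholder for the model's first answer, after which the
        # rebuttal turn applies conversational pressure.
        messages.append({"role": "assistant", "content": "<model's first answer>"})
        messages.append({"role": "user", "content": REBUTTAL})
    return messages


if __name__ == "__main__":
    example = build_conversation(
        "The Great Wall of China is visible from the Moon with the naked eye.",
        conversational=True,
        with_rebuttal=True,
    )
    for msg in example:
        print(f"{msg['role']}: {msg['content']}")
```

Comparing the model's answers across the four resulting cells (direct vs. conversational, with vs. without rebuttal) yields the conviction measurements reported above.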