LLM-powered conversational agents increasingly influence our decision-making, raising concerns about "sycophancy": the tendency of LLMs to excessively agree with users, even at the expense of truthfulness. While prior work has primarily examined LLM sycophancy as a model behavior, our understanding of how users perceive this phenomenon and how it affects user trust remains limited. In this work, we conceptualize LLM sycophancy along two key constructs: conversational demeanor (complimentary vs. neutral) and stance adaptation (adaptive vs. consistent). A 2 × 2 between-subjects experiment (N = 224) revealed complex dynamics: complimentary LLMs that adapted their stance reduced perceived authenticity and trust, whereas neutral LLMs that adapted their stance enhanced both, suggesting a pathway for manipulating users into over-trusting LLMs beyond their actual capabilities. Our findings advance a user-centric understanding of LLM sycophancy and offer important implications for developing more ethical and trustworthy LLM systems.