As large language models (LLMs) increasingly become important components of news recommender systems, employing LLMs in such systems introduces new risks, such as the influence of cognitive biases in LLMs. Cognitive biases are systematic patterns of deviation from norms or rationality in judgment, which can result in inaccurate outputs from LLMs and thus threaten the reliability of news recommender systems. Specifically, LLM-based news recommender systems affected by cognitive biases could lead to the propagation of misinformation, the reinforcement of stereotypes, and the formation of echo chambers. In this paper, we explore the potential impact of multiple cognitive biases on LLM-based news recommender systems, including anchoring bias, framing bias, status quo bias, and group attribution bias. Furthermore, to facilitate future research aimed at improving the reliability of LLM-based news recommender systems, we discuss strategies to mitigate these biases from the perspectives of data augmentation, prompt engineering, and learning algorithms.