Large Language Model (LLM)-based recommendation systems provide more comprehensive recommendations than traditional systems by deeply analyzing content and user behavior. However, these systems often exhibit biases, favoring mainstream content while marginalizing non-traditional options due to skewed training data. This study investigates the intricate relationship between bias and LLM-based recommendation systems, with a focus on music, song, and book recommendations across diverse demographic and cultural groups. Through a comprehensive analysis conducted across multiple LLMs, this paper evaluates the impact of bias on recommendation outcomes. Our findings show that biases are both deeply embedded and widely pervasive across these systems, underscoring the substantial and widespread nature of the issue. Moreover, contextual information, such as socioeconomic status, further amplifies these biases, demonstrating the complexity and depth of the challenges involved in producing fair recommendations across different groups.