Popularity bias, whereby popular items are favored at the expense of niche content, is a crucial yet underexplored challenge in recommender systems. Most existing debiasing methods treat the problem as one of diversity enhancement or long-tail coverage, neglecting the deeper semantic layer that embodies the causal origins of the bias itself. Such shallow interpretations limit both debiasing effectiveness and recommendation accuracy. In this paper, we propose FairLRM, a novel framework that bridges this semantic gap through Recommendation via Large Language Models (RecLLM). FairLRM decomposes popularity bias into item-side and user-side components, using structured instruction-based prompts to enhance the model's comprehension of both global item distributions and individual user preferences. Unlike traditional methods that rely on surface-level signals such as "diversity" or "debiasing", FairLRM improves the model's ability to semantically interpret and address the underlying bias. Empirical evaluation shows that FairLRM significantly improves both fairness and recommendation accuracy, providing a more semantically aware and trustworthy approach to mitigating popularity bias. The implementation is available at https://github.com/LuoRenqiang/FairLRM.
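To make the two-sided decomposition concrete, the following is a minimal, hypothetical sketch (not the released FairLRM implementation; all function names and the prompt wording are illustrative assumptions) of how an item-side signal (global popularity shares) and a user-side signal (a user's long-tail preference ratio) could be combined into a structured instruction prompt:

```python
# Hypothetical sketch of a FairLRM-style structured prompt.
# The decomposition into item-side and user-side components follows the
# abstract; the specific statistics and prompt text are assumptions.
from collections import Counter

def popularity_stats(interactions):
    """Item-side component: each item's share of all interactions."""
    counts = Counter(item for _, item in interactions)
    total = sum(counts.values())
    return {item: c / total for item, c in counts.items()}

def user_tail_ratio(user, interactions, pop, threshold=0.01):
    """User-side component: fraction of the user's history on long-tail items
    (items whose global popularity share falls below `threshold`)."""
    items = [i for u, i in interactions if u == user]
    tail = [i for i in items if pop[i] < threshold]
    return len(tail) / len(items) if items else 0.0

def build_prompt(user, candidates, interactions):
    """Assemble a structured instruction prompt from both bias components."""
    pop = popularity_stats(interactions)
    ratio = user_tail_ratio(user, interactions, pop)
    lines = [
        "You are a recommender. Account for popularity bias.",
        "Global popularity share of each candidate: "
        + ", ".join(f"{i}: {pop.get(i, 0.0):.2%}" for i in candidates),
        f"This user's long-tail preference ratio: {ratio:.2f}.",
        "Rank the candidates, balancing relevance against popularity bias.",
    ]
    return "\n".join(lines)

interactions = [("u1", "A"), ("u1", "B"), ("u2", "A"), ("u2", "C"), ("u3", "A")]
print(build_prompt("u1", ["A", "B", "C"], interactions))
```

Exposing both signals explicitly in the instruction, rather than a bare "diversify" directive, is what lets the LLM reason about the cause of the bias instead of only its symptom.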