Semantic understanding of popularity bias is a crucial yet underexplored challenge in recommender systems, where popular items are often favored at the expense of niche content. Most existing debiasing methods reduce the problem to diversity enhancement or long-tail coverage, neglecting the deeper semantic layer that embodies the causal origins of the bias itself. Such shallow interpretations limit both debiasing effectiveness and recommendation accuracy. In this paper, we propose FairLRM, a novel framework that closes this gap through Recommendation via Large Language Models (RecLLM). FairLRM decomposes popularity bias into item-side and user-side components and uses structured, instruction-based prompts to strengthen the model's comprehension of both the global item distribution and individual user preferences. Unlike traditional methods that rely on surface-level signals such as "diversity" or "debiasing", FairLRM improves the model's ability to semantically interpret and counteract the underlying bias. Empirical evaluation shows that FairLRM significantly improves both fairness and recommendation accuracy, offering a more semantically aware and trustworthy approach to mitigating popularity bias. The implementation is available at https://github.com/LuoRenqiang/FairLRM.
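To make the idea of item-side and user-side decomposition concrete, the following is a minimal, purely illustrative sketch of how a structured instruction prompt might expose both the global item-popularity distribution and an individual user's history to an LLM. All function names, the prompt wording, and the `head_share` threshold are assumptions for illustration, not the actual FairLRM implementation.

```python
# Hypothetical sketch of FairLRM-style prompt construction.
# Names and template wording are illustrative assumptions only.
from collections import Counter

def popularity_stats(interactions):
    """Item-side signal: each item's share of all interactions."""
    counts = Counter(item for _, item in interactions)
    total = sum(counts.values())
    return {item: n / total for item, n in counts.items()}

def build_prompt(user_id, interactions, candidates, head_share=0.2):
    """Compose a structured instruction prompt combining the global
    item distribution (item side) with the user's own history
    (user side), so the model can reason about popularity bias."""
    pop = popularity_stats(interactions)
    ranked = sorted(pop, key=pop.get, reverse=True)
    # Treat the top `head_share` fraction of items as "head" (popular).
    head = set(ranked[: max(1, int(len(ranked) * head_share))])
    history = [item for u, item in interactions if u == user_id]
    lines = [
        "You are a recommender that corrects for popularity bias.",
        f"Globally popular (head) items: {sorted(head)}.",
        f"User {user_id} has interacted with: {history}.",
        f"Candidates to rank: {candidates}.",
        "Rank the candidates by the user's semantic preferences, "
        "not by global popularity alone.",
    ]
    return "\n".join(lines)
```

A prompt built this way gives the model explicit, separated evidence for both bias components rather than a single opaque "diversify" instruction.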