Diffusion-based recommender systems have recently been shown to outperform traditional generative recommendation approaches, such as variational autoencoders and generative adversarial networks. Nevertheless, the machine learning literature has raised concerns that diffusion models, while learning the distribution of data samples, may inadvertently carry information bias and lead to unfair outcomes. In light of this, and considering the relevance that fairness has held in recommendation research over the last few decades, we conduct one of the first fairness investigations in the literature on DiffRec, a pioneering approach in diffusion-based recommendation. First, we propose an experimental setting involving DiffRec (and its variant L-DiffRec), nine state-of-the-art recommendation models, two recommendation datasets popular in the fairness-aware literature, and six metrics accounting for accuracy and consumer/provider fairness. Then, we perform a twofold analysis: one study assesses the models' performance in terms of accuracy and recommendation fairness separately, while the other identifies whether, and to what extent, these metrics can achieve a performance trade-off. The experimental results of both studies confirm the initial unfairness warnings, while also paving the way for addressing them in future research.
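To make the accuracy/consumer-fairness pairing concrete, the sketch below computes NDCG@k per user and reports the gap in mean NDCG between two user groups (e.g., active vs. less active users). This is a minimal illustration of one common consumer-fairness formulation, not the exact metric set used in the paper; the function names and the two-group split are assumptions for the example.

```python
import numpy as np

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k ranked items."""
    rel = np.asarray(relevances, dtype=float)[:k]
    if rel.size == 0:
        return 0.0
    discounts = np.log2(np.arange(2, rel.size + 2))  # positions 1..k -> log2(2..k+1)
    return float(np.sum(rel / discounts))

def ndcg_at_k(relevances, k):
    """NDCG@k: DCG normalized by the DCG of the ideally sorted ranking."""
    idcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / idcg if idcg > 0 else 0.0

def consumer_fairness_gap(group_a_lists, group_b_lists, k=10):
    """Absolute difference in mean NDCG@k between two user groups.

    Each argument is a list of per-user relevance lists (ranked order).
    A value of 0 indicates parity in recommendation quality; larger
    values indicate the recommender favors one group (illustrative
    consumer-fairness notion, assumed for this example).
    """
    mean_a = np.mean([ndcg_at_k(r, k) for r in group_a_lists])
    mean_b = np.mean([ndcg_at_k(r, k) for r in group_b_lists])
    return abs(mean_a - mean_b)
```

In a setting like the paper's, such a group-gap metric would be read jointly with the overall accuracy scores, since a model can reduce the gap simply by degrading quality for everyone.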