Despite its breakthroughs in classification problems, knowledge distillation (KD) for recommendation models and ranking problems has not been well studied in the literature. This dissertation is devoted to developing knowledge distillation methods for recommender systems that fully realize the performance potential of a compact model. We propose novel distillation methods designed for recommender systems, categorized by their knowledge source as follows: (1) Latent knowledge: we propose two methods that transfer the latent knowledge in user/item representations. They effectively transfer knowledge of niche tastes with a balanced distillation strategy that prevents the KD process from being biased toward a few large preference groups. We also propose a new method that transfers user/item relations in the representation space; it selectively transfers essential relations in consideration of the limited capacity of the compact model. (2) Ranking knowledge: we propose three methods that transfer ranking knowledge from the recommendation results. They formulate the KD process as a ranking-matching problem and transfer the knowledge via a listwise learning strategy. Further, we present a new learning framework that compresses the ranking knowledge of heterogeneous recommendation models; it is designed to ease the computational burden of model ensembles, a dominant solution in many recommendation applications. We validate the benefits of the proposed methods and frameworks through extensive experiments. In summary, this dissertation sheds light on knowledge distillation approaches for a better accuracy-efficiency trade-off in recommendation models.
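The abstract does not specify the exact form of the balanced distillation strategy for latent knowledge; the following is a minimal sketch, assuming a hint-regression loss on teacher embeddings reweighted by inverse preference-group size so that small (niche) groups are not drowned out by a few large ones. The function name, tensor shapes, and group assignments are illustrative assumptions, not the dissertation's implementation.

```python
import torch

def balanced_latent_distillation(student_emb: torch.Tensor,
                                 teacher_emb: torch.Tensor,
                                 group_ids: torch.Tensor) -> torch.Tensor:
    """Hint-style latent KD with per-group balancing (illustrative sketch).

    student_emb: [N, d] student user/item embeddings (assumed already
                 projected to the teacher's dimension d).
    teacher_emb: [N, d] frozen teacher embeddings.
    group_ids:   [N] preference-group assignment of each user/item
                 (hypothetical; e.g., from clustering teacher embeddings).
    """
    # Per-sample reconstruction error of the teacher representation.
    per_sample = (student_emb - teacher_emb).pow(2).sum(dim=1)  # [N]

    # Weight each sample by the inverse size of its preference group,
    # so the loss is not dominated by a small number of large groups.
    counts = torch.bincount(group_ids)                  # [num_groups]
    weights = 1.0 / counts.float()[group_ids]           # [N]
    weights = weights / weights.sum()                   # normalize to sum to 1

    return (weights * per_sample).sum()
```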
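To make the ranking-matching formulation concrete, here is a minimal sketch of a listwise distillation objective: the student is trained to reproduce the teacher's top-K item ordering via a Plackett-Luce (ListMLE-style) log-likelihood. The choice of ListMLE, the top-K truncation, and all names are assumptions for illustration; the dissertation's three methods may use different listwise losses.

```python
import torch

def listwise_ranking_distillation(student_scores: torch.Tensor,
                                  teacher_scores: torch.Tensor,
                                  k: int = 10) -> torch.Tensor:
    """ListMLE-style listwise KD loss (illustrative sketch).

    student_scores, teacher_scores: [batch_size, num_items] relevance
    scores. The student is pushed to rank the teacher's top-k items in
    the teacher's order.
    """
    # The teacher's top-k items define the target permutation.
    _, top_idx = teacher_scores.topk(k, dim=1)     # [B, k]
    s = student_scores.gather(1, top_idx)          # student scores, teacher order

    # Plackett-Luce negative log-likelihood of that permutation:
    #   -sum_i [ s_i - log(sum_{j >= i} exp(s_j)) ]
    # The suffix partition terms come from a reversed cumulative logsumexp.
    rev = torch.flip(s, dims=[1])
    log_denom = torch.flip(torch.logcumsumexp(rev, dim=1), dims=[1])  # [B, k]
    nll = (log_denom - s).sum(dim=1)               # per-user loss
    return nll.mean()
```

Truncating to the teacher's top-K reflects the abstract's emphasis on transferring only the most essential knowledge given the compact model's limited capacity: positions deep in the ranking are ignored rather than forced onto the student.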