Offline evaluations in recommender system research depend heavily on benchmark datasets, many of which are pruned, such as the widely used MovieLens collections. This thesis examines the impact of data pruning (specifically, removing users with fewer than a specified number of interactions) on both dataset characteristics and algorithm performance. Five benchmark datasets were analysed in their unpruned form and at five successive pruning levels (5, 10, 20, 50, 100). For each resulting core, we examined its structural and distributional characteristics and trained and tested eleven representative algorithms. To further assess whether pruned datasets lead to artificially inflated performance results, we also evaluated models trained on the pruned training sets but tested on unpruned data. Results show that commonly applied core pruning can be highly selective, leaving as few as 2% of the original users in some datasets. Traditional algorithms achieved higher nDCG@10 scores when trained and tested on pruned data; however, this advantage largely disappeared when they were evaluated on unpruned test sets. Across all algorithms, performance on unpruned test data declined as the pruning level increased, highlighting the impact of dataset reduction on recommender algorithm performance.
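For illustration, the following is a minimal sketch of the user-pruning step described above, assuming the interaction log is a pandas DataFrame with a `user_id` column; the column name and data layout are assumptions for the example, not taken from the thesis, and the thesis pipeline may implement the filter differently.

```python
import pandas as pd

def prune_users(interactions: pd.DataFrame, min_interactions: int,
                user_col: str = "user_id") -> pd.DataFrame:
    """Keep only users with at least `min_interactions` interactions."""
    counts = interactions[user_col].value_counts()
    keep = counts[counts >= min_interactions].index
    return interactions[interactions[user_col].isin(keep)]

# Hypothetical usage over the five pruning levels examined in the thesis:
# ratings = pd.read_csv("ratings.csv")  # assumed columns: user_id, item_id, ...
# for level in (5, 10, 20, 50, 100):
#     core = prune_users(ratings, level)
#     frac = core["user_id"].nunique() / ratings["user_id"].nunique()
#     print(f"level {level}: {frac:.1%} of users retained")
```

A single-pass user filter like this mirrors the description in the abstract; some published pipelines instead apply iterative p-core pruning to both users and items until the threshold holds for all.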
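Since the abstract reports nDCG@10, a short sketch of that metric for a single user may help; the IDCG convention below (ideal ranking taken over the same relevance values, truncated to k) is one common choice and is an assumption here, as the thesis may define it differently.

```python
import numpy as np

def ndcg_at_k(ranked_rel, k: int = 10) -> float:
    """nDCG@k for one user.

    `ranked_rel` holds the graded relevance of the recommended items in
    rank order (e.g. 1/0 for hit/miss against held-out test items).
    """
    rel = np.asarray(ranked_rel, dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, k + 2))  # positions 1..k
    top = rel[:k]
    dcg = (top * discounts[:top.size]).sum()
    ideal = np.sort(rel)[::-1][:k]                  # best possible ordering
    idcg = (ideal * discounts[:ideal.size]).sum()
    return dcg / idcg if idcg > 0 else 0.0
```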