The evaluation of new algorithms in recommender systems frequently depends on publicly available datasets, such as those from MovieLens or Amazon. Some of these datasets are used disproportionately often, primarily because of their historical popularity as baselines rather than their suitability for specific research contexts. This thesis addresses this issue by introducing the Algorithm Performance Space, a novel framework designed to differentiate datasets based on the measured performance of algorithms applied to them. An experimental study proposes three metrics that quantify and justify dataset selection for the evaluation of new algorithms. These metrics also serve to validate assumptions about datasets, such as the similarity between MovieLens datasets of varying sizes. Constructing an Algorithm Performance Space and applying the proposed metrics made it possible to differentiate datasets and to identify diverse dataset selections. While the results demonstrate the framework's potential, further research directions and implications are discussed for developing Algorithm Performance Spaces tailored to diverse use cases.