Sequential Recommendation (SR) plays a critical role in predicting users' sequential preferences. Despite its growing prominence across industries, the increasing scale of SR models incurs substantial computational costs and unpredictability, challenging developers to manage resources efficiently. Under this predicament, Scaling Laws have achieved notable success by modeling how loss decreases as models scale up. However, a gap remains between loss and model performance, and the latter is of greater concern in practical applications. Moreover, as training data continues to grow, it increasingly contains repetitive and uninformative samples. In response, we introduce the Performance Law for SR models, which aims to theoretically investigate and model the relationship between model performance and data quality. Specifically, we first fit the HR (Hit Ratio) and NDCG (Normalized Discounted Cumulative Gain) metrics of transformer-based SR models. We then propose Approximate Entropy (ApEn) to assess data quality, offering a more nuanced measure than traditional data-quantity metrics. Our method enables accurate predictions across various dataset scales and model sizes, demonstrates a strong correlation on large SR models, and offers insights into achieving optimal performance for any given model configuration.
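For context on the two metrics being fitted: in the leave-one-out evaluation setting common to SR, HR@K and NDCG@K for a single test interaction reduce to the simple computation sketched below. This is the standard definition, not code from the paper; the function name `hr_ndcg_at_k` is ours for illustration.

```python
import numpy as np

def hr_ndcg_at_k(ranked_items, target, k=10):
    """HR@K and NDCG@K for one test interaction (leave-one-out setting).

    ranked_items : item IDs sorted by predicted score, descending;
    target       : the ground-truth next item.
    HR@K is 1 if the target appears in the top-K, else 0; NDCG@K
    discounts a hit by its rank position: 1 / log2(rank + 2).
    """
    top_k = list(ranked_items[:k])
    if target in top_k:
        rank = top_k.index(target)            # 0-based position of the hit
        return 1.0, 1.0 / np.log2(rank + 2)   # DCG of a single relevant item
    return 0.0, 0.0
```

Averaging these per-interaction values over the test set gives the dataset-level HR@K and NDCG@K that a performance law would then model as a function of model size and data quality.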
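Approximate Entropy itself is a classical regularity statistic (Pincus, 1991): low values indicate repetitive, predictable sequences and high values indicate irregular ones, which is what makes it a plausible proxy for the informativeness of interaction data. The sketch below follows the standard definition applied to a user's item-ID sequence; the paper's exact formulation and parameter choices may differ.

```python
import numpy as np

def approximate_entropy(seq, m=2, r=0.0):
    """Standard Approximate Entropy, ApEn(m, r) (Pincus, 1991).

    seq : 1-D sequence of values (e.g., item IDs in a user's
          interaction history);
    m   : embedding dimension (window length);
    r   : tolerance; r=0 counts only identical windows as matches,
          a natural choice for discrete item-ID sequences.
    """
    u = np.asarray(seq, dtype=float)
    N = len(u)

    def phi(m):
        n = N - m + 1
        # All overlapping length-m windows of the sequence.
        x = np.array([u[i:i + m] for i in range(n)])
        # Chebyshev (max-coordinate) distance between every pair of windows.
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        # Fraction of windows within tolerance r of window i
        # (self-matches included, as in the original definition).
        c = np.sum(d <= r, axis=1) / n
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# A perfectly repetitive history scores ~0; an irregular one scores higher.
print(approximate_entropy([1, 2, 1, 2, 1, 2, 1, 2]))   # ≈ 0.0
print(approximate_entropy([3, 1, 4, 1, 5, 9, 2, 6]))   # > 0
```

Under this reading, datasets whose interaction sequences carry higher ApEn contribute more non-redundant signal per sample, which is the intuition behind using it in place of raw data quantity.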