Sequential recommendation (SR) models with transformer-based architectures are widely adopted in real-world applications, where they require frequent retraining to adapt to ever-changing user preferences. However, training transformer-based SR models often incurs a high computational cost associated with scoring extensive item catalogs, often exceeding thousands of items. This cost stems mainly from the use of the cross-entropy (CE) loss, whose peak memory scales proportionally to the catalog size, batch size, and sequence length. Recognizing this, practitioners in the field of recommendation systems typically combine the CE loss with negative sampling, thereby reducing the explicit memory demands of the final layer. However, a small number of negative samples degrades model performance, and, as we demonstrate in our work, increasing both the number of negative samples and the batch size further improves the model's performance but quickly exceeds the memory of industrial GPUs (~40 GB). In this work, we introduce the CCE- method, a GPU-efficient implementation of the CE loss with negative sampling. Our method accelerates training by up to two times while reducing memory consumption by more than 10 times. Leveraging the memory savings afforded by training with CCE-, it becomes feasible to improve model accuracy on datasets with a large item catalog compared to models trained with the original PyTorch-implemented loss functions. Finally, we analyze the key memory-related hyperparameters and highlight the necessity of a delicate balance among them: scaling both the number of negative samples and the batch size leads to better results than maximizing only one of them. To facilitate further adoption of CCE-, we release a Triton kernel that efficiently implements the proposed method.
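To make the memory argument concrete, the following NumPy sketch contrasts full-catalog cross-entropy, whose logits tensor is proportional to batch size times catalog size, with CE over a sampled candidate set of one positive and a handful of negatives. This is a minimal illustration under the simplifying assumption of uniform negative sampling and a flattened (batch, hidden) input; the function names and shapes are our own, not the CCE- implementation itself.

```python
import numpy as np

def full_ce_loss(hidden, item_emb, targets):
    """CE over the whole catalog.

    hidden: (batch, dim) user representations; item_emb: (catalog, dim).
    The logits matrix is (batch, catalog), so peak memory grows with
    batch size * catalog size (and with sequence length once every
    position in the sequence is scored).
    """
    logits = hidden @ item_emb.T                      # (batch, catalog)
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

def sampled_ce_loss(hidden, item_emb, targets, num_negatives, rng):
    """CE over one positive plus uniformly drawn negatives.

    The logits matrix shrinks to (batch, 1 + num_negatives), decoupling
    peak memory from the catalog size.
    """
    batch = hidden.shape[0]
    negatives = rng.integers(0, item_emb.shape[0], size=(batch, num_negatives))
    candidates = np.concatenate([targets[:, None], negatives], axis=1)
    logits = np.einsum('bd,bkd->bk', hidden, item_emb[candidates])
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[:, 0].mean()  # the positive sits at column 0
```

With a catalog of millions of items, the `(batch, catalog)` logits of the full loss dominate memory, while the sampled variant keeps it at `(batch, 1 + num_negatives)`; the trade-off, as discussed above, is that too few negatives hurts accuracy.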