Sequential recommendation (SR) models with transformer-based architectures are widely adopted in real-world applications, where they require frequent retraining to adapt to ever-changing user preferences. However, training transformer-based SR models incurs a high computational cost when scoring large item catalogs, which often exceed thousands of items. This cost stems mainly from the cross-entropy (CE) loss, whose peak memory scales with the product of catalog size, batch size, and sequence length. Recognizing this, practitioners in the field of recommender systems typically reduce memory consumption by combining the CE loss with negative sampling, which shrinks the explicit memory footprint of the final layer. However, a small number of negative samples degrades model performance, and, as we demonstrate in this work, increasing both the number of negative samples and the batch size further improves performance but quickly exceeds the memory capacity of industrial-grade GPUs (~40 GB). In this work, we introduce the CCE- method, a GPU-efficient implementation of the CE loss with negative sampling. Our method accelerates training by up to two times while reducing memory consumption by more than 10 times. The memory savings afforded by CCE- make it feasible to train more accurate models on datasets with large item catalogs than is possible with the original PyTorch implementations of these loss functions. Finally, we analyze the key memory-related hyperparameters and highlight the need for a careful balance among them: scaling the number of negative samples and the batch size jointly yields better results than maximizing either one alone. To facilitate further adoption of CCE-, we release a Triton kernel that efficiently implements the proposed method.
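The memory argument above can be made concrete with a minimal NumPy sketch (all dimensions are illustrative assumptions, not values from the paper, and this is a naive reference implementation rather than the CCE- kernel itself). It compares the logits tensor materialized by full-catalog CE, whose size is batch × sequence × catalog, against CE restricted to the positive item plus uniformly sampled negatives:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions for illustration only).
batch, seq_len, d, catalog = 8, 16, 32, 10_000
num_neg = 128  # sampled negatives per positive

hidden = rng.standard_normal((batch, seq_len, d)).astype(np.float32)
item_emb = rng.standard_normal((catalog, d)).astype(np.float32)
positives = rng.integers(0, catalog, size=(batch, seq_len))

def full_ce(hidden, item_emb, positives):
    # Naive CE: materializes logits over the entire catalog.
    logits = hidden @ item_emb.T                      # (batch, seq, catalog)
    logits = logits - logits.max(axis=-1, keepdims=True)
    log_z = np.log(np.exp(logits).sum(axis=-1))
    pos = np.take_along_axis(logits, positives[..., None], axis=-1)[..., 0]
    return (log_z - pos).mean(), logits.nbytes

def sampled_ce(hidden, item_emb, positives, num_neg):
    # CE with negative sampling: logits only over positive + sampled items.
    # (Collisions between negatives and the positive are ignored for brevity.)
    negs = rng.integers(0, catalog, size=(batch, seq_len, num_neg))
    cand = np.concatenate([positives[..., None], negs], axis=-1)
    cand_emb = item_emb[cand]                         # (batch, seq, 1+num_neg, d)
    logits = np.einsum("bsd,bskd->bsk", hidden, cand_emb)
    logits = logits - logits.max(axis=-1, keepdims=True)
    log_z = np.log(np.exp(logits).sum(axis=-1))
    return (log_z - logits[..., 0]).mean(), logits.nbytes

loss_full, mem_full = full_ce(hidden, item_emb, positives)
loss_neg, mem_neg = sampled_ce(hidden, item_emb, positives, num_neg)
print(f"full-catalog logits: {mem_full / 1e6:.2f} MB, "
      f"sampled logits: {mem_neg / 1e6:.3f} MB")
```

Even at this toy scale the sampled variant's logits are roughly two orders of magnitude smaller; with industrial catalog sizes, batch sizes, and sequence lengths, the full-catalog tensor is what drives training past the ~40 GB GPU limit discussed above.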