Resource-efficient training optimization techniques are becoming increasingly important as large language models (LLMs) continue to grow in size. In particular, batch packing is commonly used in pre-training and supervised fine-tuning to improve resource efficiency. We propose preference packing, a method that improves resource efficiency for training techniques that use different responses for the same input prompt, such as reward models or Direct Preference Optimization (DPO). Preference packing reduces attention operations over duplicated input prompts and decreases KV-cache memory usage. In experiments on both text-only datasets and datasets that include images, we achieved at least a 37% reduction in training time. Notably, the method can be applied alongside existing optimization techniques such as batch sorting, yielding a 3.22x speedup.
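The core idea can be illustrated with a minimal sketch (an assumption about the packing layout, not the paper's exact implementation): instead of feeding the shared prompt twice, once with the chosen response and once with the rejected one, the pair is packed as [prompt | chosen | rejected], and a block attention mask lets each response attend to the prompt and to itself but never to the other response. The function name `pack_preference_pair` and the list-of-lists mask representation are hypothetical, chosen for clarity.

```python
def pack_preference_pair(prompt, chosen, rejected):
    """Pack a preference pair sharing one prompt.

    Returns the packed token ids and a 2-D causal attention mask
    (1 = position i may attend to position j).
    """
    ids = prompt + chosen + rejected
    n, p = len(ids), len(prompt)
    c_end = p + len(chosen)  # boundary between chosen and rejected segments

    mask = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):  # causal: only positions <= i are candidates
            # j is visible to i if j lies in the shared prompt, or if
            # i and j lie in the same response segment.
            same_segment = i >= p and j >= p and (i >= c_end) == (j >= c_end)
            if j < p or same_segment:
                mask[i][j] = 1
    return ids, mask


# Toy example: a 2-token prompt, a 1-token chosen and a 2-token rejected
# response are packed into one 5-token sequence instead of two sequences
# totaling 7 tokens (the prompt would otherwise be duplicated).
ids, mask = pack_preference_pair([1, 2], [3], [4, 5])
```

In this toy example, the rejected tokens (positions 3 and 4) see the prompt (positions 0 and 1) but not the chosen token (position 2), so the two responses remain conditionally independent given the prompt while the prompt's attention and KV cache are computed only once.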