Recent progress in motion forecasting has been substantially driven by self-supervised pre-training. However, adapting pre-trained models to downstream tasks, especially motion prediction, through extensive fine-tuning is often inefficient: motion prediction closely aligns with the masked pre-training objectives, yet conventional full fine-tuning fails to exploit this alignment. To address this, we introduce Forecast-PEFT, a fine-tuning strategy that freezes the majority of the model's parameters and restricts training to newly introduced prompts and adapters. This approach preserves the pre-learned representations and substantially reduces the number of trainable parameters, and it adapts efficiently to different datasets, maintaining robust performance without extensive retraining. Our experiments show that Forecast-PEFT outperforms traditional full fine-tuning on motion prediction tasks, achieving higher accuracy with only 17% of the trainable parameters typically required. Moreover, our comprehensive adaptation, Forecast-FT, further improves prediction performance, showing up to a 9.6% improvement over conventional baselines. Code will be available at https://github.com/csjfwang/Forecast-PEFT.
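To make the recipe concrete, below is a minimal PyTorch sketch of the parameter-efficient pattern the abstract describes: freeze the pre-trained backbone and train only newly added prompt tokens and a bottleneck adapter. All module names and sizes (`PEFTWrapper`, `Adapter`, `prompt_len`, `dim`) are illustrative assumptions, not the actual Forecast-PEFT implementation.

```python
# Hypothetical illustration of prompt + adapter tuning on a frozen backbone.
# Not the Forecast-PEFT codebase; names and dimensions are assumptions.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter applied with a residual connection."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual keeps the frozen backbone's features intact by default.
        return x + self.up(self.act(self.down(x)))


class PEFTWrapper(nn.Module):
    """Freezes a pre-trained backbone; trains only prompts and an adapter."""

    def __init__(self, backbone: nn.Module, dim: int, prompt_len: int = 8):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():  # freeze pre-trained weights
            p.requires_grad = False
        # Learnable prompt tokens prepended to the input sequence.
        self.prompts = nn.Parameter(torch.zeros(1, prompt_len, dim))
        self.adapter = Adapter(dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        batch = tokens.size(0)
        x = torch.cat([self.prompts.expand(batch, -1, -1), tokens], dim=1)
        x = self.backbone(x)
        return self.adapter(x)


# Only prompt and adapter weights reach the optimizer.
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True),
    num_layers=2,
)
model = PEFTWrapper(backbone, dim=128)
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)

# Fraction of parameters that are actually trained.
total = sum(p.numel() for p in model.parameters())
print(sum(p.numel() for p in trainable) / total)
```

Under this pattern, the pre-trained weights are untouched at every step, so the representations learned during masked pre-training are preserved exactly; only the small prompt and adapter tensors are updated, which is what keeps the trainable-parameter count low.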