This technical report outlines the methods we applied in the PRCV Challenge, which focuses on cognition and decision-making in driving scenarios. We adopted InternVL-2.0, a leading open-source multi-modal model, and enhanced it by refining both the model input and the training methodology. For the input data, we strategically concatenated and formatted the multi-view images; notably, we used the coordinates of the original images without transformation. For training, we first pre-trained the model on publicly available autonomous-driving datasets to strengthen its alignment with the challenge tasks, then fine-tuned it on the DriveLM-nuScenes dataset. During fine-tuning, we modified the loss function to improve the model's precision in predicting coordinate values. Together, these approaches equip our model with strong cognitive and decision-making capabilities in driving scenarios. Our model achieved a score of 0.6064, securing first prize in the competition's final results.
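The report does not give the exact form of the modified loss, so the following is a minimal sketch of one common way to bias a language-modeling objective toward coordinate accuracy: a token-level cross-entropy in which tokens belonging to coordinate values are up-weighted. The function name, the `coord_mask` input, and the `coord_weight` value are all illustrative assumptions, not the authors' actual implementation.

```python
import math

def weighted_token_loss(log_probs, target_ids, coord_mask, coord_weight=2.0):
    """Token-level cross-entropy that up-weights coordinate tokens.

    log_probs   : per-position dicts mapping token id -> log probability
    target_ids  : gold token ids, one per position
    coord_mask  : True where the gold token is part of a coordinate value
    coord_weight: hypothetical multiplier for coordinate tokens
                  (the report does not state how the loss was reweighted)
    """
    total, norm = 0.0, 0.0
    for lp, tid, is_coord in zip(log_probs, target_ids, coord_mask):
        w = coord_weight if is_coord else 1.0
        total += -w * lp[tid]   # weighted negative log-likelihood
        norm += w
    return total / norm         # normalize by total weight


# Example: the second token is a coordinate token, so its error
# contributes twice as much to the average loss.
loss = weighted_token_loss(
    log_probs=[{0: math.log(0.5)}, {1: math.log(0.25)}],
    target_ids=[0, 1],
    coord_mask=[False, True],
)
```

In practice this kind of reweighting is applied inside the fine-tuning loop of the multi-modal model, with the mask derived from where coordinate strings appear in the target answer.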