Recent advances in vision-language models (VLMs) have demonstrated remarkable capability in image classification. These VLMs leverage a predefined set of categories to construct text prompts for zero-shot reasoning. However, in more open-ended domains such as autonomous driving, a predefined label set becomes impractical, as the semantic label space is unknown and constantly evolving. Additionally, fixed text-prompt embeddings tend to predict a single label, whereas in reality multiple labels commonly exist per image. In this paper, we introduce Chain-of-Action (CoA), a method that generates labels aligned with all contextually relevant features of an image. CoA is built on the observation that enriched, valuable contextual information improves generative performance during inference. Because traditional vision-language models tend to output singular and redundant responses, we employ a tailored CoA to alleviate this problem. We first decompose the generative labeling task into detailed actions and construct a chain of actions leading to the final generative objective. Each action extracts and merges key information from the previous action and passes the enriched information as context to the next, ultimately improving the VLM's ability to generate comprehensive and accurate semantic labels. We assess the effectiveness of CoA through comprehensive evaluations on widely used benchmark datasets, and the results demonstrate significant improvements across key performance metrics.
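The chain-of-action idea described above — each action extracting key information and passing an enriched context to the next — can be sketched as a simple sequential pipeline. This is a minimal illustrative sketch, not the paper's actual implementation: the action names, the `extract` callables (toy stand-ins for VLM queries), and the dictionary-based context are all hypothetical.

```python
# Hypothetical sketch of a chain-of-action labeling pipeline.
# Each "extract" callable stands in for a VLM query; in practice it would
# prompt the model with the image plus the accumulated context.

def make_action(name, extract):
    """Wrap an extraction step so its output is merged into the running context."""
    def action(image, context):
        info = extract(image, context)   # sub-task result (toy stand-in for a VLM call)
        merged = dict(context)
        merged[name] = info              # enrich the context for the next action
        return merged
    return action

def chain_of_actions(image, actions, initial_context=None):
    """Run actions in order; each sees the context enriched by its predecessors."""
    context = dict(initial_context or {})
    for action in actions:
        context = action(image, context)
    return context

# Toy actions: later steps condition on earlier results.
scene = make_action("scene", lambda img, ctx: "urban street")
objects = make_action(
    "objects",
    lambda img, ctx: ["car", "pedestrian"] if ctx["scene"] == "urban street" else [],
)
labels = make_action(
    "labels",
    lambda img, ctx: sorted([ctx["scene"]] + ctx["objects"]),
)

result = chain_of_actions("example.jpg", [scene, objects, labels])
print(result["labels"])  # multiple labels per image, not a single prediction
```

The key design point the sketch conveys is that context accumulates monotonically along the chain, so the final labeling action sees everything earlier actions extracted rather than a single fixed prompt.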