Detecting Human-Object Interactions (HOI) in zero-shot settings, where models must handle unseen classes, poses significant challenges. Existing methods rely on aligning visual encoders with large Vision-Language Models (VLMs) to tap into their extensive knowledge, but they require large, computationally expensive models and face training difficulties. Adapting VLMs through prompt learning offers an alternative to direct alignment. However, fine-tuning on task-specific datasets often leads to overfitting to seen classes and suboptimal performance on unseen ones, owing to the absence of unseen-class labels. To address these challenges, we introduce a novel prompt-learning-based framework for Efficient Zero-shot HOI detection (EZ-HOI). First, we introduce Large Language Model (LLM) and VLM guidance for learnable prompts, integrating detailed HOI descriptions and visual semantics to adapt VLMs to HOI tasks. However, because training datasets contain only seen-class labels, fine-tuning VLMs on them tends to optimize the learnable prompts for seen classes rather than unseen ones. We therefore design prompt learning for unseen classes using information from related seen classes, with LLMs employed to highlight the differences between each unseen class and its related seen classes. Quantitative evaluations on benchmark datasets show that EZ-HOI achieves state-of-the-art performance across various zero-shot settings with only 10.35% to 33.95% of the trainable parameters required by existing methods. Code is available at https://github.com/ChelsieLei/EZ-HOI.
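The unseen-class prompt idea above can be illustrated with a minimal sketch: find the most related seen class by text similarity, then shift its prompt toward the unseen class using a "difference" vector (which an LLM description would supply in EZ-HOI). All class names, embeddings, and the 0.5 blending weight below are hypothetical stand-ins, not the paper's actual implementation.

```python
# Toy sketch of initializing an unseen-class prompt from a related seen class.
# Embeddings here are made-up 3-d vectors purely for illustration.
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Hypothetical text embeddings of seen HOI classes.
seen_prompts = {
    "ride bicycle": [0.9, 0.1, 0.0],
    "hold cup":     [0.1, 0.8, 0.2],
}

# Hypothetical embedding of an unseen class name, e.g. "ride motorcycle".
unseen_emb = [0.8, 0.2, 0.1]

# 1) Pick the most related seen class by text similarity.
related = max(seen_prompts, key=lambda c: cosine(seen_prompts[c], unseen_emb))

# 2) Initialize the unseen-class prompt from the related seen-class prompt,
#    shifted by a difference vector (standing in for LLM-highlighted
#    differences between the unseen class and its related seen class).
llm_difference = [u - s for u, s in zip(unseen_emb, seen_prompts[related])]
unseen_prompt = [s + 0.5 * d for s, d in zip(seen_prompts[related], llm_difference)]
```

In the actual framework the prompts are learnable tokens refined during training; this sketch only shows the initialization intuition of borrowing from related seen classes.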