Large-scale pre-trained models (PTMs) show strong zero-shot capabilities. In this paper, we study how to leverage them for zero-shot visual question answering (VQA). Our approach is motivated by a few observations. First, VQA questions often require multiple steps of reasoning, a capability that most PTMs still lack. Second, different steps in a VQA reasoning chain require different skills, such as object detection and relational reasoning, but a single PTM may not possess all of these skills. Third, recent work on zero-shot VQA does not explicitly model multi-step reasoning chains, which makes it less interpretable than a decomposition-based approach. We propose a modularized zero-shot network that explicitly decomposes questions into sub-reasoning steps and is highly interpretable. We convert the sub-reasoning tasks into objectives acceptable to PTMs and assign each task to a suitable PTM without any adaptation. Experiments on two VQA benchmarks under the zero-shot setting demonstrate the effectiveness of our method and its better interpretability compared with several baselines.