Visual Prompting is a technique for teaching models to perform a visual task via in-context examples, without any additional training. In this work, we analyze the activations of MAE-VQGAN, a recent Visual Prompting model, and find task vectors: activations that encode task-specific information. Equipped with this insight, we demonstrate that it is possible to identify the task vectors and use them to guide the network towards performing different tasks without providing any input-output examples. To find task vectors, we compute the average intermediate activations per task and use the REINFORCE algorithm to search for the best-performing subset of these activations. The resulting task vectors guide the model towards performing a task better than the original model, without the need for input-output examples.
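The search procedure described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the shapes, the `proxy_reward` function, and all hyperparameters are invented stand-ins. In the real method, the reward would come from patching the selected mean activations into MAE-VQGAN and scoring its task performance on a small held-out set; here a toy reward that favors a known target subset stands in for that evaluation, so that the REINFORCE subset search itself is runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: mean intermediate activations per (layer, head),
# averaged over many prompted examples of one task.
n_layers, n_heads, dim = 4, 8, 16
task_means = rng.normal(size=(n_layers, n_heads, dim))

def proxy_reward(mask):
    # Stand-in for evaluating the patched model on a held-out set.
    # Toy objective: reward selecting exactly the first two layers' heads.
    target = np.zeros((n_layers, n_heads))
    target[:2] = 1.0
    return -np.abs(mask - target).sum()

# REINFORCE over independent Bernoulli gates, one per (layer, head),
# deciding whether that position's mean activation is patched in.
logits = np.zeros((n_layers, n_heads))
lr, n_iters, batch = 0.5, 300, 16
for _ in range(n_iters):
    p = 1.0 / (1.0 + np.exp(-logits))            # selection probabilities
    masks = (rng.random((batch, n_layers, n_heads)) < p).astype(float)
    rewards = np.array([proxy_reward(m) for m in masks])
    baseline = rewards.mean()                     # variance-reduction baseline
    # For a Bernoulli policy, grad of log-prob w.r.t. logits is (mask - p).
    grad = ((rewards - baseline)[:, None, None] * (masks - p)).mean(axis=0)
    logits += lr * grad

# Final subset: positions whose gates converged towards "select".
best_mask = (logits > 0).astype(float)
```

Under this toy reward, the gates converge so that `best_mask` selects the target positions; with a real patched-model reward, the same loop would instead discover which layer/head positions carry the task-specific signal.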