Object affordance reasoning, the ability to infer object functionalities from physical properties, is fundamental to task-oriented planning and acting in both humans and Artificial Intelligence (AI). It relies on commonsense knowledge of object physics and functionality, extending well beyond simple object recognition. Current computational models for affordance reasoning from perception lack generalizability, which limits their applicability in novel scenarios. Meanwhile, general-purpose Large Language Models (LLMs) with emergent reasoning capabilities are challenging to deploy on local devices for task-oriented manipulation. Here, we introduce LVIS-Aff, a large-scale dataset comprising 1,496 tasks and 119k images, designed to enhance the generalizability of affordance reasoning from perception. Building on this dataset, we develop Afford-X, an end-to-end trainable affordance reasoning model that incorporates Verb Attention and Bi-Fusion modules to improve multi-modal understanding. Afford-X achieves up to a 12.1% performance improvement over the best previously reported non-LLM results, along with a 1.2% gain over our earlier conference version, while maintaining a compact 187M-parameter size and running inference nearly 50 times faster than the GPT-4V API. Our work demonstrates the potential of efficient, generalizable affordance reasoning models that can be deployed on local devices for task-oriented manipulation. We showcase Afford-X's effectiveness in enabling task-oriented manipulation for robots across diverse tasks and environments, underscoring its efficiency and its broader implications for advancing robotics and AI systems in real-world applications.