Foundation models and large language models have shown immense human-like understanding and capabilities for generating text and digital media. However, foundation models that can freely sense, interact with, and actuate the physical world as they do in the digital domain are far from being realized. This is due to a number of challenges, including: 1) being constrained to the types of static devices and sensors deployed, 2) events often being localized to one part of a large space, and 3) requiring dense deployments of devices to achieve full coverage. As a critical step toward enabling foundation models to successfully and freely interact with the physical environment, we propose RASP, a modular and reconfigurable sensing and actuation platform that allows drones to autonomously swap onboard sensors and actuators in only $25$ seconds, enabling a single drone to quickly adapt to a diverse range of tasks. We demonstrate through real smart home deployments that RASP enables FMs and LLMs to complete diverse tasks up to $85\%$ more successfully by allowing them to target specific areas with specific sensors and actuators on the fly.