Implementations of collective motion have traditionally disregarded the limited sensing capabilities of individuals, instead assuming omniscient perception of the environment. This study implements a visual flocking model in a ``robot-in-the-loop'' approach to reproduce these behaviors with a flock of 10 independent spherical robots. The model achieves robotic collective motion using only the panoramic visual information available to each robot, namely the retinal position, optical size, and optic flow of neighboring robots. We introduce a virtual anchor to confine the collective robotic movements and thus avoid wall interactions. For the first time, a simple visual robot-in-the-loop approach succeeds in reproducing several collective motion phases, in particular swarming and milling. Another milestone achieved by this model is bridging the gap between simulation and physical experiments: the same visual model produces nearly identical behaviors in both environments. In conclusion, we show that our minimal visual collective motion model is sufficient to recreate most collective behaviors on a robot-in-the-loop system that is scalable, behaves as numerical simulations predict, and is easily comparable to traditional models.
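As a minimal illustrative sketch (not the paper's exact controller; the gains `k_align`, `k_size`, and `k_anchor` are hypothetical), a purely visual heading update of this kind can be written from each robot's panoramic cues: the retinal positions (bearings) of its neighbors, their optical sizes as a distance proxy, and the bearing of the virtual anchor.

```python
import math

def heading_update(bearings, sizes, anchor_bearing,
                   k_align=0.5, k_size=1.0, k_anchor=0.3):
    """Compute one robot's turning command from panoramic visual cues.

    bearings: retinal positions (rad) of visible neighbors, 0 = straight ahead
    sizes: angular (optical) sizes of those neighbors; larger means closer
    anchor_bearing: bearing (rad) of the virtual anchor confining the flock
    """
    turn = 0.0
    for phi, theta in zip(bearings, sizes):
        # Attraction toward small (distant) neighbors, weakening toward
        # repulsion as optical size grows, acting along the bearing.
        turn += (1.0 - k_size * theta) * math.sin(phi)
    if bearings:
        turn *= k_align / len(bearings)
    # The virtual anchor pulls the flock back toward its bearing,
    # keeping the robots away from the arena walls.
    turn += k_anchor * math.sin(anchor_bearing)
    return turn
```

With no neighbors and the anchor dead ahead the command is zero; a single distant neighbor to the left produces a leftward (positive) turn.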