It has long been recognized that neuromorphic hardware offers benefits for robotics such as low energy consumption, low latency, and unique methods of learning. In the pursuit of more complex tasks, especially those incorporating multimodal data, one hurdle continuing to prevent their realization is the inability to orchestrate multiple networks on neuromorphic hardware without resorting to off-chip process-management logic. To address this, we present a first example of a pipeline for vision-based robot control in which numerous complex networks run entirely on hardware, using a spiking neural state machine for process orchestration. The pipeline is validated on the Intel Loihi 2 research chip. We show that all components can run concurrently on-chip in the milliwatt regime at latencies competitive with the state of the art. An equivalent network on simulated hardware is shown to accomplish robotic arm plug insertion in simulation, and the core elements of the pipeline are additionally tested on a real robotic arm.