Mobile robotic chemists are a fast-growing trend in chemistry and materials research. However, these mobile robots so far lack workflow-awareness skills. This poses the risk that even a small anomaly, such as an improperly capped sample vial, could disrupt the entire workflow, wasting time and resources and potentially exposing human researchers to risks such as toxic materials. Existing perception mechanisms can be used to predict anomalies, but they often generate excessive false positives. These may halt workflow execution unnecessarily, requiring researchers to intervene and resume the workflow when no problem actually exists, negating the benefits of autonomous operation. To address this problem, we propose PREVENT, a system comprising navigation and manipulation skills based on a multimodal Behavior Tree (BT) approach that can be integrated into existing software architectures with minimal modifications. Our approach involves a hierarchical perception mechanism that exploits AI techniques and sensory feedback from Dexterous Vision and Navigational Vision cameras and an IoT gas sensor module for execution-related decision-making. Experimental evaluations show that the proposed approach is comparatively efficient and completely avoids both false negatives and false positives when tested in simulated risk scenarios within our robotic chemistry workflow. The results also show that the proposed multimodal perception skills achieved deployment accuracies higher than the average of the corresponding unimodal skills, for both navigation and manipulation.
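To illustrate the kind of multimodal gating a BT-based perception mechanism implies, the following is a minimal, self-contained sketch, not the PREVENT implementation itself. All names (e.g. `PerceptionInputs`, `build_manipulation_gate`) and thresholds are assumptions introduced for illustration; the actual system fuses Dexterous Vision, Navigational Vision, and IoT gas-sensor feedback within its own BT framework.

```python
# Illustrative sketch only (not the authors' implementation): a tiny behavior-tree
# sequence that gates a manipulation step on hypothetical checks from two cameras
# and a gas sensor. Class names and thresholds are assumptions for illustration.

from dataclasses import dataclass
from enum import Enum
from typing import Callable, List


class Status(Enum):
    SUCCESS = "success"
    FAILURE = "failure"


class Node:
    def tick(self) -> Status:
        raise NotImplementedError


class Sequence(Node):
    """Ticks children in order; fails as soon as one child fails."""
    def __init__(self, children: List[Node]):
        self.children = children

    def tick(self) -> Status:
        for child in self.children:
            if child.tick() is Status.FAILURE:
                return Status.FAILURE
        return Status.SUCCESS


class Condition(Node):
    """Wraps a boolean perception check (e.g. a vision or gas-sensor skill)."""
    def __init__(self, name: str, check: Callable[[], bool]):
        self.name = name
        self.check = check

    def tick(self) -> Status:
        return Status.SUCCESS if self.check() else Status.FAILURE


@dataclass
class PerceptionInputs:
    # Hypothetical fused readings from the two cameras and the IoT gas sensor.
    vial_capped_prob: float   # e.g. from a Dexterous Vision classifier
    path_clear: bool          # e.g. from a Navigational Vision check
    gas_ppm: float            # e.g. from the IoT gas sensor module


def build_manipulation_gate(inputs: PerceptionInputs) -> Node:
    """Multimodal gate: manipulation proceeds only if all modalities agree."""
    return Sequence([
        Condition("vial_capped", lambda: inputs.vial_capped_prob > 0.9),
        Condition("workspace_clear", lambda: inputs.path_clear),
        Condition("no_gas_leak", lambda: inputs.gas_ppm < 5.0),
    ])


if __name__ == "__main__":
    readings = PerceptionInputs(vial_capped_prob=0.97, path_clear=True, gas_ppm=0.3)
    print(build_manipulation_gate(readings).tick())  # Status.SUCCESS
```

Requiring agreement across modalities before acting is one plausible way such a gate could suppress the single-sensor false positives that would otherwise halt the workflow unnecessarily.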