Testing autonomous robotic manipulators is challenging due to the complex software interactions between vision and control components. A crucial element of modern robotic manipulators is the deep-learning-based object detection model. Creating and assessing this model requires real-world data, which can be hard to collect and label, especially when the hardware setup is not available. Current techniques primarily focus on training deep neural networks (DNNs) on synthetic data and identifying failures through offline or online simulation-based testing. However, it remains unclear how to exploit the identified failures to uncover design flaws early on, and how to leverage the DNN optimized within the simulation to accelerate its engineering for real-world tasks. To address these challenges, we propose the MARTENS (Manipulator Robot Testing and Enhancement in Simulation) framework, which integrates the photorealistic NVIDIA Isaac Sim simulator with evolutionary search to identify critical scenarios aimed at improving the deep learning vision model and uncovering system design flaws. Evaluation on two industrial case studies demonstrated that MARTENS effectively reveals robotic manipulator system failures, detecting 25% to 50% more failures with greater diversity than random test generation. The model trained and repaired with the MARTENS approach achieved mean average precision (mAP) scores of 0.91 and 0.82 on real-world images without any prior retraining. Further fine-tuning on real-world images for a few epochs (fewer than 10) increased the mAP to 0.95 and 0.89 for the first and second use cases, respectively. In contrast, a model trained solely on real-world data achieved mAP scores of only 0.80 and 0.75 for use cases 1 and 2 after more than 25 epochs.