Background: Robot-assisted minimally invasive surgery (RMIS) research increasingly relies on multimodal data, yet access to proprietary robot telemetry remains a major barrier. We introduce MiDAS, an open-source, platform-agnostic system enabling time-synchronized, non-invasive multimodal data acquisition across surgical robotic platforms. Methods: MiDAS integrates electromagnetic and RGB-D hand tracking, foot pedal sensing, and surgical video capture without requiring proprietary robot interfaces. We validated MiDAS on the open-source Raven-II and the clinical da Vinci Xi by collecting multimodal datasets of peg transfer and hernia repair suturing tasks performed by surgical residents, and conducted correlation analysis and downstream gesture recognition experiments. Results: External hand and foot sensing closely approximated internal robot kinematics, and the non-invasive motion signals achieved gesture recognition performance comparable to proprietary telemetry. Conclusion: MiDAS enables reproducible multimodal RMIS data collection and is released with annotated datasets, including the first multimodal dataset capturing hernia repair suturing on high-fidelity simulation models.